BCI Application - moving exponential average and alternate visualization #1380
gabelerner-kernel wants to merge 13 commits into nvidia-holoscan:main from
Conversation
Greptile Summary

This PR adds two features to the BCI visualization application: (1) an optional matplotlib-based visualization backend as an alternative to ClaraViz, and (2) an exponential moving average (EMA) high-pass filter to improve real-time signal quality by removing low-frequency drift.
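For readers unfamiliar with the technique, an EMA high-pass filter subtracts a slowly updating running average (the drift estimate) from each incoming frame. A minimal illustrative sketch follows; the class and parameter names here are generic, not necessarily the PR's exact code:

```python
import numpy as np


class EmaHighPass:
    """High-pass filter: subtract an exponential moving average (the baseline)."""

    def __init__(self, alpha: float):
        self.alpha = alpha  # smoothing factor in (0, 1]; larger = faster-tracking baseline
        self.ema = None     # running baseline, initialized on the first frame

    def step(self, x):
        if self.ema is None:
            self.ema = x  # first frame seeds the baseline, so output is all zeros
        else:
            self.ema = self.alpha * x + (1 - self.alpha) * self.ema
        return x - self.ema  # remove low-frequency drift


hp = EmaHighPass(alpha=0.1)
print(hp.step(np.array([1.0, 2.0])))  # first frame: baseline == input, output is zeros
print(hp.step(np.array([2.0, 3.0])))  # later frames: input minus smoothed baseline
```

On the second call the baseline becomes `0.1 * [2, 3] + 0.9 * [1, 2] = [1.1, 2.1]`, so the output is `[0.9, 0.9]`.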
Confidence Score: 3/5
Important Files Changed
Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
A[StreamOperator] -->|samples| B[BuildRHSOperator]
B -->|batch| C[NormalizeOperator]
C -->|normalized| D[RegularizedSolverOperator]
D -->|result| E[ConvertToVoxelsOperator]
E -->|"affine_4x4 + hb_voxel_data"| F{viz mode?}
F -->|matplotlib| G[MplVisualizationOperator]
G --> H[PlotServer subprocess]
F -->|clara_viz| I[VoxelStreamToVolumeOp]
I -->|volume| J[VolumeRendererOp]
J -->|color_buffer| K[ColorBufferPassthroughOp]
K -->|color_buffer| L[HolovizOp]
L -->|camera_pose| J
style G fill:#4CAF50,color:#fff
style H fill:#4CAF50,color:#fff
style B fill:#2196F3,color:#fff
Last reviewed commit: cbe5351
    def _ensure_mesh_loaded(self) -> Tuple[NDArray[np.float32], NDArray[np.int32]]:
        """Lazy-load the surface mesh."""
        if self._surface_mesh is None:
            logger.info("Loading  surface mesh...")
Typo: double space in log message
        }
      }
    }
Trailing whitespace before closing brace
Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
Walkthrough

Adds an optional Matplotlib visualization backend and operator, introduces an EMA high-pass filter, changes voxel layout and affine emission in voxelization, updates the Dockerfile and requirements for optional matplotlib support, and converts application metadata to a multi-mode format with CLI control for backend selection.

Changes
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks

✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@applications/bci_visualization/metadata.json`:
- Around line 81-90: The "matplotlib" entry's description incorrectly claims
"volume rendering"; update the description string in metadata.json under the
"matplotlib" object to accurately state that matplotlib is used for 2D plotting
(or surface mesh visualization if that's the intended behavior) rather than
volume rendering—locate the "matplotlib" key and edit the "description" value
accordingly so it reflects 2D/surface mesh visualization instead of volume
rendering.
In `@applications/bci_visualization/operators/mpl_visualization_operator.py`:
- Line 96: The f-string prefix on the static title call causes lint F541; in the
mpl visualization operator locate the ax.set_title call (in
mpl_visualization_operator, where ax.set_title(f"HbO Frame") is used) and remove
the unnecessary f prefix so it becomes a plain string literal (ax.set_title("HbO
Frame")).
- Around line 1-2: Add the required SPDX copyright header at the top of the
module mpl_visualization_operator.py (above the existing module docstring) to
satisfy the repository policy; replace or prepend the current triple-quoted
module docstring with the standard header (e.g., an SPDX-License-Identifier line
and the copyright owner/year line) and then keep the existing docstring
immediately after the header so the file begins with the proper license header
followed by the """Visualization operator for HbO voxel data on brain
surface.""" line.
- Around line 102-107: Add a new pytest test file named
test_mpl_visualization_operator.py next to mpl_visualization_operator.py that
mirrors the style of operators/deidentification/pixelator/test_pixelator.py and
covers MplVisualizationOperator: write tests for initialization (instantiating
MplVisualizationOperator), port setup (input/output ports existence and types),
error handling (invalid inputs or missing dependencies raising expected
exceptions), compute logic (feed sample voxel and surface data to compute and
assert produced outputs or side-effects), and edge cases (empty/None voxel
arrays, extreme values, and non-matching mesh/voxel dimensions). Use pytest
fixtures/mocks where needed to simulate matplotlib/fsaverage dependencies and
keep tests isolated and deterministic.
In
`@applications/bci_visualization/operators/reconstruction/build_rhs_operator.py`:
- Around line 116-118: Add pytest unit tests for the EmaHighPass filter that
exercise EmaHighPass.step directly: create tests that pass both NumPy arrays and
CuPy arrays (when cupy is available) to confirm the returned object type matches
the input (numpy.ndarray vs cupy.ndarray) and numerical output matches a
reference implementation; include a test for the first-frame initialization
behavior (when self.ema is None the filter should set self.ema = x and return
the baseline-corrected value accordingly) and tests for a small sequence of
frames to validate the exponential moving average update over time; ensure tests
import and instantiate EmaHighPass (and call _ema_high_pass.step or
EmaHighPass.step) and skip CuPy-specific assertions if CuPy is not installed.
🧹 Nitpick comments (9)
applications/bci_visualization/bci_visualization.py (2)
114-159: `holoviz` operator is instantiated even when using the matplotlib backend.

The `holoviz` operator (lines 153-159) is created unconditionally but is only used in the `else` branch. When `viz == "matplotlib"`, this wastes resources and may cause unnecessary initialization.

♻️ Move holoviz creation inside the else branch

    if self._viz == "matplotlib":
        from operators.mpl_visualization_operator import MplVisualizationOperator

        mpl_visualization_operator = MplVisualizationOperator(fragment=self)
    else:
        # ========== Visualization Pipeline Operators ==========
        # Get volume_renderer kwargs from YAML config to extract density range
        volume_renderer_kwargs = self.kwargs("volume_renderer")
        density_min = volume_renderer_kwargs.get("density_min", -100.0)
        density_max = volume_renderer_kwargs.get("density_max", 100.0)

        voxel_to_volume = VoxelStreamToVolumeOp(...)
        volume_renderer = VolumeRendererOp(...)

        # IMPORTANT changes to avoid deadlocks ...
        volume_renderer.spec.outputs["color_buffer_out"].condition(ConditionType.NONE)

        color_buffer_passthrough = ColorBufferPassthroughOp(
            self,
            name="color_buffer_passthrough",
        )
-   holoviz = HolovizOp(
-       self,
-       name="holoviz",
-       window_title="Kernel Flow BCI Real-Time Reconstruction and Visualization",
-       enable_camera_pose_output=True,
-       cuda_stream_pool=cuda_stream_pool,
-   )
+       holoviz = HolovizOp(
+           self,
+           name="holoviz",
+           window_title="Kernel Flow BCI Real-Time Reconstruction and Visualization",
+           enable_camera_pose_output=True,
+           cuda_stream_pool=cuda_stream_pool,
+       )
52-57: Consider validating the `viz` parameter.

The `viz` parameter accepts any string, but only `"matplotlib"` triggers the matplotlib path; all other values fall through to the clara_viz backend. A typo (e.g., `"matplotib"`) would silently use clara_viz without warning.

🛡️ Add validation for supported backends

+_SUPPORTED_VIZ_BACKENDS = {"clara_viz", "matplotlib"}
+
 class BciVisualizationApp(Application):
     ...
     def __init__(
         self,
         ...
         viz: str = "clara_viz",
         **kwargs,
     ):
+        if viz not in _SUPPORTED_VIZ_BACKENDS:
+            raise ValueError(f"Unsupported visualization backend: {viz!r}. "
+                             f"Supported: {_SUPPORTED_VIZ_BACKENDS}")
         self._rendering_config = render_config_file
         self._mask_path = mask_path
         self._viz = viz

applications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (1)
+_SUPPORTED_VIZ_BACKENDS = {"clara_viz", "matplotlib"} + class BciVisualizationApp(Application): ... def __init__( self, ... viz: str = "clara_viz", **kwargs, ): + if viz not in _SUPPORTED_VIZ_BACKENDS: + raise ValueError(f"Unsupported visualization backend: {viz!r}. " + f"Supported: {_SUPPORTED_VIZ_BACKENDS}") self._rendering_config = render_config_file self._mask_path = mask_path self._viz = vizapplications/bci_visualization/operators/reconstruction/convert_to_voxels_operator.py (1)
144-155: In-place mutation ofself._affinecould cause subtle issues.The affine correction at line 153 mutates
self._affinedirectly. Ifself._affineis referenced elsewhere before this correction or ifcomputeis called multiple times with different conditions, this could lead to unexpected behavior. Consider creating a corrected copy instead.♻️ Use a corrected copy instead of in-place mutation
if not self._affine_sent: # Mathematical correction: A_new = A_raw * Translation(min_idx) # This maps the 0-indexed volume correctly into world space - R = self._affine[:3, :3] - t = self._affine[:3, 3] - - # The new translation is the world-space position of the local (0,0,0) - corrected_translation = R @ min_idx + t - self._affine[:3, 3] = corrected_translation + corrected_affine = self._affine.copy() + R = corrected_affine[:3, :3] + t = corrected_affine[:3, 3] + corrected_affine[:3, 3] = R @ min_idx + t + self._affine = corrected_affine self._emit_affine_once(op_output)applications/bci_visualization/operators/mpl_visualization_operator.py (4)
83-89: Catch specific exception instead of broad `Exception`.

Line 85 catches all exceptions but likely only expects `queue.Empty`. Use the specific exception for clarity.

♻️ Catch queue.Empty specifically

 try:
     item = self.queue.get(timeout=0.1)  # small timeout so GUI stays responsive
-except Exception:
+except mp.queues.Empty:
     # no data, still allow GUI to process events
     fig.canvas.flush_events()
     sleep(0.01)
     continue
148-155: Silent exception swallowing in `stop()` hinders debugging.

The try-except-pass at lines 151-152 silently ignores all exceptions. At minimum, log the exception for debugging purposes.

♻️ Log the exception

 def stop(self):
     try:
         self._plot_queue.put(None)  # sentinel to ask process to exit
     except Exception:
-        pass
+        logger.debug("Failed to send shutdown signal to plot process", exc_info=True)
     if hasattr(self, "_plot_proc"):
         self._plot_proc.join(timeout=1.0)
267-272: Catch `queue.Full` specifically for non-blocking put.

Line 270 catches all exceptions but should only catch `queue.Full` for the `put_nowait` call.

♻️ Catch queue.Full specifically

 try:
     # non-blocking put (drop frame if queue is full)
     self._plot_queue.put_nowait(face_colors)
-except Exception:
+except mp.queues.Full:
     # queue full: drop frame to avoid blocking operator
     logger.debug("Plot queue full — dropping frame")
241-243: Use `logging.exception` to include traceback.

When logging errors with exception context, use `logger.exception()` instead of `logger.error()` to automatically include the stack trace.

♻️ Use logging.exception

 except Exception as e:
-    logger.error("Failed to compute voxel coordinates: %s", e)
+    logger.exception("Failed to compute voxel coordinates")
     return

applications/bci_visualization/utils/reconstruction/ema_high_pass.py (1)
18-26: First-frame output will be zeros due to initialization.

On the first call, `self.ema` is set to `x`, so `return x - self.ema` yields zeros. This is acceptable since `BuildRHSOperator._apply_baseline` already skips the first frame, but consider documenting this warm-up behavior in the docstring.

Also, unit tests for this utility class would help ensure the filter math is correct across edge cases.

📝 Suggested docstring enhancement

 def step(self, x):
     """
     Process one sample (or vector of channels).
+
+    Note: On the first call, the output will be zeros as the EMA
+    is initialized to the input value.
     """

applications/bci_visualization/metadata.json (1)
91-92: Minor: Trailing whitespace on line 92.

There appears to be a blank line with potential whitespace before the closing brace. This is a minor formatting issue.
    "matplotlib": {
      "description": "Run the application with matplotlib for volume rendering.",
      "build": {
        "docker_build_args": ["--build-arg", "WITH_MATPLOTLIB=yes"]
      },
      "run": {
        "command": "python3 bci_visualization.py --renderer_config <holohub_app_source>/config.json --mask_path <holohub_data_dir>/bci_visualization/anatomy_labels_high_res.nii.gz --viz matplotlib",
        "workdir": "holohub_app_source"
      }
    }
Description is inaccurate - matplotlib doesn't perform volume rendering.
The description states "matplotlib for volume rendering" but matplotlib is a 2D plotting library. Based on the PR summary mentioning "surface mesh rendering", consider updating the description to accurately reflect the visualization approach.
📝 Suggested fix
"matplotlib": {
- "description": "Run the application with matplotlib for volume rendering.",
+ "description": "Run the application with matplotlib for surface mesh visualization.",
    """Visualization operator for HbO voxel data on brain surface."""
Copyright header missing or incorrectly formatted.
The pipeline failed with: "Copyright header missing or formatted incorrectly (manual fix required)". The current docstring should be replaced with or preceded by the proper SPDX copyright header.
📝 Add proper copyright header
-"""Visualization operator for HbO voxel data on brain surface."""
+"""
+SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES.
+SPDX-License-Identifier: Apache-2.0
+
+Visualization operator for HbO voxel data on brain surface.
+"""
Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!
    class MplVisualizationOperator(Operator):
        """Operator that visualizes HbO voxel data on a brain surface.

        This operator projects voxel data onto a fsaverage surface mesh
        and renders it using matplotlib's Poly3DCollection for real-time updates.
        """
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check for test files in the operator directory
echo "=== Searching for test files in operators directory ==="
fd -t f '.*test.*\.py' applications/bci_visualization/operators/ 2>/dev/null || echo "No test files found with pattern '*test*.py'"
echo ""
echo "=== Listing all files in operators directory ==="
fd -t f '' applications/bci_visualization/operators/ 2>/dev/null | head -20
echo ""
echo "=== Checking for conftest.py ==="
fd -t f 'conftest.py' applications/bci_visualization/operators/ 2>/dev/null || echo "No conftest.py found"
echo ""
echo "=== Checking directory structure ==="
find applications/bci_visualization/operators/ -maxdepth 2 -type f -name "*.py" 2>/dev/null | sort

Repository: nvidia-holoscan/holohub
Length of output: 2351
🏁 Script executed:
#!/bin/bash
# Check for test files in other operators directories
echo "=== Searching for test files across all operators directories ==="
fd -t f 'test_.*\.py' '*/operators/**/' 2>/dev/null | head -20
echo ""
echo "=== Searching for conftest.py in any operators directory ==="
fd -t f 'conftest.py' '*/operators/**/' 2>/dev/null
echo ""
echo "=== Checking overall test structure ==="
find . -path "*/operators/*test*.py" -type f 2>/dev/null | head -10
echo ""
echo "=== Checking if operators have tests in separate test directories ==="
find . -path "*/operators/*/test*" -type d 2>/dev/null | head -10

Repository: nvidia-holoscan/holohub
Length of output: 1028
🏁 Script executed:
#!/bin/bash
# Check structure of operator with test file
echo "=== pixelator operator with test ==="
ls -la operators/deidentification/pixelator/ | head -20
echo ""
echo "=== Check if test_pixelator.py is in same directory ==="
cat operators/deidentification/pixelator/test_pixelator.py | head -20
echo ""
echo "=== Check other Python operators in applications without tests ==="
ls -la applications/bci_visualization/operators/reconstruction/*.py | wc -l

Repository: nvidia-holoscan/holohub
Length of output: 1490
Add unit tests for MplVisualizationOperator.
Python operators must include unit tests in the same directory using pytest framework. Add test_mpl_visualization_operator.py with tests covering initialization, port setup, error handling, compute logic, and edge cases. Reference operators/deidentification/pixelator/test_pixelator.py for a similar example.
    # apply EMA high-pass filter
    realtime_moments = self._ema_high_pass.step(realtime_moments)
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find tests and implementation of EmaHighPass
# Search for EmaHighPass test files
echo "=== Searching for EmaHighPass tests ==="
rg -l "EmaHighPass" --type py | grep -i test
# Check if ema_high_pass.py exists and show its content
echo "=== Checking EmaHighPass implementation ==="
find . -name "ema_high_pass.py" -type f

Repository: nvidia-holoscan/holohub
Length of output: 221
🏁 Script executed:
#!/bin/bash
# Get detailed implementation of ema_high_pass.py
cat applications/bci_visualization/utils/reconstruction/ema_high_pass.py

Repository: nvidia-holoscan/holohub
Length of output: 711
🏁 Script executed:
#!/bin/bash
# Check if there are any CuPy imports in the module or related files
echo "=== Checking for CuPy references in reconstruction utilities ==="
rg "cupy|cp\." applications/bci_visualization/utils/reconstruction/ --type py
# Check build_rhs_operator for context
echo "=== Context from build_rhs_operator.py around lines 116-118 ==="
sed -n '100,130p' applications/bci_visualization/operators/reconstruction/build_rhs_operator.py

Repository: nvidia-holoscan/holohub
Length of output: 3662
Add unit tests for EmaHighPass filter to verify CuPy array compatibility.
The EmaHighPass class uses standard Python arithmetic operations that are compatible with CuPy arrays, ensuring operations remain on GPU. However, there are no unit tests for this critical utility class. Per coding guidelines, unit tests using pytest must be included for code contributions. Add tests to verify the filter works correctly with both NumPy and CuPy arrays, including edge cases such as the first frame initialization where self.ema = x, ensuring the baseline computation behavior is correct.
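One possible shape for such tests, as a NumPy-only sketch: the stand-in class below mirrors the `step` implementation quoted in this thread (in the repo the real class lives under `applications/bci_visualization/utils/reconstruction/ema_high_pass.py`), and CuPy variants would mirror these with `pytest.importorskip("cupy")`:

```python
import numpy as np


# Stand-in mirroring the EmaHighPass.step implementation quoted in this thread.
class EmaHighPass:
    def __init__(self, alpha: float):
        self.alpha = alpha
        self.ema = None

    def step(self, x):
        if self.ema is None:
            self.ema = x
        else:
            self.ema = self.alpha * x + (1 - self.alpha) * self.ema
        return x - self.ema


def test_first_frame_initializes_baseline():
    # First frame: ema is set to x, so the output is all zeros.
    f = EmaHighPass(alpha=0.3)
    assert np.allclose(f.step(np.ones(4)), 0.0)


def test_matches_reference_over_sequence():
    # Compare against an independently computed EMA over several frames.
    f = EmaHighPass(alpha=0.5)
    ema = None
    for x in [np.array([2.0, 0.0]), np.array([4.0, 1.0]), np.array([4.0, 3.0])]:
        ema = x if ema is None else 0.5 * x + 0.5 * ema
        assert np.allclose(f.step(x), x - ema)


test_first_frame_initializes_baseline()
test_matches_reference_over_sequence()
```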
    def step(self, x):
        """
        Process one sample (or vector of channels).
        """
        if self.ema is None:
            self.ema = x
        else:
            self.ema = self.alpha * x + (1 - self.alpha) * self.ema
        return x - self.ema
    (No newline at end of file)
Filter doesn't preserve CuPy array type. When x is a CuPy array from GPU operations, self.ema stores it directly, but subsequent operations expect CuPy arrays. The subtraction x - self.ema works, but if x changes type between calls, this will cause issues.
Consider explicitly handling CuPy arrays:
def step(self, x):
"""
Process one sample (or vector of channels).
"""
if self.ema is None:
self.ema = x.copy() if hasattr(x, 'copy') else x
else:
self.ema = self.alpha * x + (1 - self.alpha) * self.ema
return x - self.ema
Additional Comments (1)
        self._plot_queue.put_nowait(face_colors)
    except Exception:
        # queue full: drop frame to avoid blocking operator
        logger.debug("Plot queue full — dropping frame")
    @@ -0,0 +1,2 @@
    matplotlib==3.10.8
    nilearn==0.13.0
    (No newline at end of file)
Suggested fix: add a trailing newline at the end of the file.
    scipy==1.16.3
    h5py==3.15.1
    kernel-sdk==6.6.0
    (No newline at end of file)
Suggested fix: add a trailing newline at the end of the file.
            self.ema = x
        else:
            self.ema = self.alpha * x + (1 - self.alpha) * self.ema
        return x - self.ema
    (No newline at end of file)
Suggested fix: add a trailing newline at the end of the file.
    # Install optional matplotlib dependencies
    COPY applications/bci_visualization/requirements-matplotlib.txt /tmp/requirements-matplotlib.txt
    RUN if [ "$WITH_MATPLOTLIB" = "yes" ]; then pip install -r /tmp/requirements-matplotlib.txt --no-cache-dir; fi
    (No newline at end of file)
Suggested fix: add a trailing newline at the end of the file.
    # Install python3-tk (needed for matplotlib GUI support)
    ARG WITH_MATPLOTLIB=no
    RUN if [ "$WITH_MATPLOTLIB" = "yes" ]; then apt-get update && apt-get install -y python3-tk && rm -rf /var/lib/apt/lists/*; fi
It would be great to move this to the end to have all matplotlib related dependencies in one place. In this way, both images, with and without matplotlib, can share the initial cached docker layers. Thanks
    """
    from math import pi

     FS = 4.76  # Streaming frequency of Kernel Flow in Hz
Indentation inconsistency: space before FS = 4.76 on line 7
Suggested fix (remove the leading space):

    FS = 4.76  # Streaming frequency of Kernel Flow in Hz
    def _ensure_mesh_loaded(self) -> Tuple[NDArray[np.float32], NDArray[np.int32]]:
        """Lazy-load the surface mesh."""
        if self._surface_mesh is None:
            logger.info("Loading  surface mesh...")
Double space in log message: "Loading  surface mesh..."

Suggested fix:

    logger.info("Loading surface mesh...")
    hb_abs_max = max(abs(hb_min), abs(hb_max))
    clipped = np.clip(face_values, -hb_abs_max, hb_abs_max)
    normalized = clipped / hb_abs_max
Division by zero risk when hb_abs_max equals zero (can occur if all face values are zero)
Suggested fix:

    # Normalize to [-1, 1] for colormap, centered at 0
    hb_abs_max = max(abs(hb_min), abs(hb_max))
    if hb_abs_max == 0:
        hb_abs_max = 1.0  # prevent division by zero
    clipped = np.clip(face_values, -hb_abs_max, hb_abs_max)
    normalized = clipped / hb_abs_max
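To see the guard in action, a quick standalone check; the helper name `normalize_symmetric` is ours, introduced only for this illustration:

```python
import numpy as np


def normalize_symmetric(face_values, hb_min, hb_max):
    """Clip to [-m, m] and scale to [-1, 1]; guard the all-zero case."""
    m = max(abs(hb_min), abs(hb_max))
    if m == 0:
        return np.zeros_like(face_values)  # nothing to normalize, avoid 0/0
    return np.clip(face_values, -m, m) / m


# All-zero input no longer divides by zero.
print(normalize_symmetric(np.array([0.0, 0.0]), 0.0, 0.0))
# Out-of-range values are clipped into [-1, 1] before scaling.
print(normalize_symmetric(np.array([-2.0, 0.5]), -1.0, 1.0))
```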
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@applications/bci_visualization/operators/mpl_visualization_operator.py`:
- Line 159: Minor typo in the log message: update the logger.info call in
mpl_visualization_operator.py (the logger.info call that currently logs "Loading
surface mesh...") to remove the double space so it reads "Loading surface
mesh..."; locate the logger.info invocation in the MPLVisualizationOperator (or
the function where logger.info is used) and correct the string literal to a
single space.
- Around line 3-26: Reorder the imports in mpl_visualization_operator.py into
the standard groups: 1) from __future__ imports, 2) standard library (e.g.,
logging, multiprocessing as mp, time functions perf_counter and sleep, typing
Any/Tuple), 3) third-party packages alphabetized (e.g., matplotlib.pyplot as
plt, mpl_toolkits.mplot3d.art3d.Poly3DCollection, numpy as np and
numpy.typing.NDArray, scipy.ndimage, nilearn.datasets and nilearn.surface), and
4) local/project imports (holoscan.core symbols). Within each group sort
alphabetically and keep related multi-name imports together; ensure the existing
top-level names like logger remain defined after the imports.
- Around line 274-277: Guard against dividing by zero when computing normalized:
compute hb_abs_max from hb_min and hb_max as currently done, but if hb_abs_max
is zero (or extremely small) avoid dividing by zero by substituting a small
epsilon or returning zeros; update the block around hb_abs_max, clipped,
normalized so that when hb_abs_max == 0 you either set normalized =
np.zeros_like(face_values) or use hb_abs_max = max(hb_abs_max,
np.finfo(face_values.dtype).eps) before dividing (refer to hb_min, hb_max,
face_values, clipped, normalized).
🧹 Nitpick comments (5)
applications/bci_visualization/operators/mpl_visualization_operator.py (5)
31-51: Consider adding error handling for mesh loading.

`datasets.fetch_surf_fsaverage` fetches data from the network on first use. If the fetch fails (network unavailable, corrupted cache), this will raise an unhandled exception during operator start. Consider wrapping with try/except and providing a more informative error message.

🛡️ Suggested improvement

 def _load_surface_mesh() -> Tuple[NDArray[np.float32], NDArray[np.int32]]:
     """Load the fsaverage surface mesh using nilearn.
     ...
     """
-    fsaverage = datasets.fetch_surf_fsaverage('fsaverage4')
+    try:
+        fsaverage = datasets.fetch_surf_fsaverage('fsaverage4')
+    except Exception as e:
+        raise RuntimeError(
+            f"Failed to fetch fsaverage4 surface mesh. Ensure network access "
+            f"or pre-cache the data: {e}"
+        ) from e
82-88: Catch specific exception for queue timeout.

Static analysis flagged catching blind `Exception`. The `queue.get(timeout=...)` call raises `queue.Empty` on timeout, so catch that specifically to avoid masking unexpected errors.

🔧 Proposed fix

+from queue import Empty
+
 ...
 try:
     item = self.queue.get(timeout=0.1)  # small timeout so GUI stays responsive
-except Exception:
+except Empty:
     # no data, still allow GUI to process events
     fig.canvas.flush_events()
     sleep(0.01)
     continue
148-154: Log exception instead of silently passing.

Static analysis flagged `try-except-pass` (S110). If `put()` fails, logging the reason helps with debugging.

🔧 Proposed fix

 def stop(self):
     try:
         self._plot_queue.put(None)  # sentinel to ask process to exit
-    except Exception:
-        pass
+    except Exception as e:
+        logger.debug("Failed to send shutdown signal to plot process: %s", e)
     if hasattr(self, "_plot_proc"):
         self._plot_proc.join(timeout=1.0)
241-243: Use `logging.exception` to include traceback.

Static analysis flagged TRY400: when logging errors from exception handlers, `logging.exception` automatically includes the traceback, which aids debugging.

🔧 Proposed fix

 except Exception as e:
-    logger.error("Failed to compute voxel coordinates: %s", e)
+    logger.exception("Failed to compute voxel coordinates: %s", e)
     return
130-131: Consider using `collections.deque` for rolling window.

Using `list.pop(0)` is O(n) per removal. For a rolling window, `collections.deque(maxlen=HB_AVERAGE_WINDOW)` is O(1) and auto-discards old items.

♻️ Suggested improvement

+from collections import deque
+
 ...
-    self._hb_min: list[float] = []
-    self._hb_max: list[float] = []
+    self._hb_min: deque[float] = deque(maxlen=HB_AVERAGE_WINDOW)
+    self._hb_max: deque[float] = deque(maxlen=HB_AVERAGE_WINDOW)

Then simplify the append/pop logic:

 self._hb_min.append(hb_min)
-if len(self._hb_min) > HB_AVERAGE_WINDOW:
-    self._hb_min.pop(0)
 hb_min = np.mean(self._hb_min)
 hb_max = np.percentile(face_values, 98)
 self._hb_max.append(hb_max)
-if len(self._hb_max) > HB_AVERAGE_WINDOW:
-    self._hb_max.pop(0)
 hb_max = np.mean(self._hb_max)

Also applies to: 263-272
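A quick demonstration of the `maxlen` behavior that replaces the manual append/len/pop(0) bookkeeping (a window size of 3 is assumed here for brevity, standing in for `HB_AVERAGE_WINDOW`):

```python
from collections import deque

# A rolling window that discards the oldest entry automatically
# once it reaches its maximum length.
window = deque(maxlen=3)
for v in [1.0, 2.0, 3.0, 4.0]:
    window.append(v)  # appending a 4th value evicts the 1st

print(list(window))               # the three most recent values
print(sum(window) / len(window))  # rolling mean over the window
```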
```python
from __future__ import annotations

import logging
import scipy.ndimage
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
from time import perf_counter, sleep
from typing import Any, Tuple
import matplotlib.pyplot as plt
import multiprocessing as mp

import numpy as np
from numpy.typing import NDArray
from nilearn import datasets, surface

from holoscan.core import (
    ConditionType,
    ExecutionContext,
    InputContext,
    Operator,
    OperatorSpec,
    OutputContext,
)

logger = logging.getLogger(__name__)
```
Imports are incorrectly sorted and/or formatted.
The pipeline flagged import sorting issues. Imports should be grouped: __future__, standard library, third-party packages, then local imports, with each group alphabetically sorted.
🔧 Fix import ordering
```diff
 from __future__ import annotations

 import logging
-import scipy.ndimage
-from mpl_toolkits.mplot3d.art3d import Poly3DCollection
-from time import perf_counter, sleep
-from typing import Any, Tuple
-import matplotlib.pyplot as plt
 import multiprocessing as mp
+from time import perf_counter, sleep
+from typing import Any, Tuple

+import matplotlib.pyplot as plt
 import numpy as np
+import scipy.ndimage
 from numpy.typing import NDArray
+from mpl_toolkits.mplot3d.art3d import Poly3DCollection
 from nilearn import datasets, surface

 from holoscan.core import (
     ConditionType,
     ExecutionContext,
     InputContext,
     Operator,
     OperatorSpec,
     OutputContext,
 )
```

🤖 Prompt for AI Agents
In `@applications/bci_visualization/operators/mpl_visualization_operator.py` around lines 3-26, reorder the imports into the standard groups: (1) `from __future__` imports; (2) standard library (`logging`, `multiprocessing as mp`, `perf_counter` and `sleep` from `time`, `Any`/`Tuple` from `typing`); (3) third-party packages, alphabetized (`matplotlib.pyplot as plt`, `Poly3DCollection` from `mpl_toolkits.mplot3d.art3d`, `numpy as np` and `numpy.typing.NDArray`, `scipy.ndimage`, `nilearn.datasets` and `nilearn.surface`); and (4) local/project imports (the `holoscan.core` symbols). Sort alphabetically within each group, keep related multi-name imports together, and ensure existing top-level names such as `logger` remain defined after the imports.
```python
    def _ensure_mesh_loaded(self) -> Tuple[NDArray[np.float32], NDArray[np.int32]]:
        """Lazy-load the surface mesh."""
        if self._surface_mesh is None:
            logger.info("Loading  surface mesh...")
```
Minor typo: double space in log message.
🔧 Fix typo

```diff
-            logger.info("Loading  surface mesh...")
+            logger.info("Loading surface mesh...")
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
            logger.info("Loading surface mesh...")
```
🤖 Prompt for AI Agents

In `@applications/bci_visualization/operators/mpl_visualization_operator.py` at line 159, fix the minor typo in the log message: update the `logger.info` call that currently logs "Loading  surface mesh..." to remove the double space so it reads "Loading surface mesh...".
```python
        # Normalize to [-1, 1] for colormap, centered at 0
        hb_abs_max = max(abs(hb_min), abs(hb_max))
        clipped = np.clip(face_values, -hb_abs_max, hb_abs_max)
        normalized = clipped / hb_abs_max
```
Guard against division by zero.
If all face_values are zero (or very close), hb_abs_max will be zero, causing division by zero.
🐛 Proposed fix

```diff
         # Normalize to [-1, 1] for colormap, centered at 0
         hb_abs_max = max(abs(hb_min), abs(hb_max))
+        if hb_abs_max < 1e-10:
+            # No meaningful variation - use neutral gray
+            face_colors = np.full((len(face_values), 4), [0.5, 0.5, 0.5, 1.0])
+        else:
-        clipped = np.clip(face_values, -hb_abs_max, hb_abs_max)
-        normalized = clipped / hb_abs_max
+            clipped = np.clip(face_values, -hb_abs_max, hb_abs_max)
+            normalized = clipped / hb_abs_max
-        # Shift to [0, 1] for colormap
-        normalized = (normalized + 1) / 2
+            # Shift to [0, 1] for colormap
+            normalized = (normalized + 1) / 2
-        face_colors = self._colormap(normalized)
+            face_colors = self._colormap(normalized)
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| # Normalize to [-1, 1] for colormap, centered at 0 | |
| hb_abs_max = max(abs(hb_min), abs(hb_max)) | |
| clipped = np.clip(face_values, -hb_abs_max, hb_abs_max) | |
| normalized = clipped / hb_abs_max | |
| # Normalize to [-1, 1] for colormap, centered at 0 | |
| hb_abs_max = max(abs(hb_min), abs(hb_max)) | |
| if hb_abs_max < 1e-10: | |
| # No meaningful variation - use neutral gray | |
| face_colors = np.full((len(face_values), 4), [0.5, 0.5, 0.5, 1.0]) | |
| else: | |
| clipped = np.clip(face_values, -hb_abs_max, hb_abs_max) | |
| normalized = clipped / hb_abs_max | |
| # Shift to [0, 1] for colormap | |
| normalized = (normalized + 1) / 2 | |
| face_colors = self._colormap(normalized) |
🤖 Prompt for AI Agents

In `@applications/bci_visualization/operators/mpl_visualization_operator.py` around lines 274-277, guard against dividing by zero when computing `normalized`: compute `hb_abs_max` from `hb_min` and `hb_max` as currently done, but when `hb_abs_max` is zero (or extremely small) avoid the division by either setting `normalized = np.zeros_like(face_values)` or clamping with `hb_abs_max = max(hb_abs_max, np.finfo(face_values.dtype).eps)` before dividing.
```python
        # Normalize to [-1, 1] for colormap, centered at 0
        hb_abs_max = max(abs(hb_min), abs(hb_max))
        clipped = np.clip(face_values, -hb_abs_max, hb_abs_max)
        normalized = clipped / hb_abs_max
```
Should we normalize frames by a global max to make frames comparable?
Additional Comments (1)
This should be moved inside the …
@gabelerner-kernel could you please address the comments on this PR? Thanks
compressed_PXL_20260128_211034806.TS.mp4
compressed_PXL_20260128_220833446.TS.mp4
Summary by CodeRabbit

New Features

- Optional matplotlib-based visualization backend (selectable from the CLI) as an alternative to the ClaraViz rendering pipeline.
- Exponential moving average (EMA) high-pass filter to remove low-frequency drift from the real-time signal.

Chores

- Dockerfile and requirements updated for optional matplotlib support.
- Application metadata converted to a multi-mode format.