feat: Live segmentation with real-time preview and blur metrics #845 (Open)
- Add progress MQTT message for proper pump timing synchronization
- Add live.py for real-time segmentation during acquisition
- Load pixel size from hardware.json for calibration consistency
- Add feature documentation with screenshots
sonnyp (Collaborator) reviewed on Jan 28, 2026:
Yey!
Can you please split this into multiple PRs so that
- It is easier to review (looking through the changes and test fixes/features)
- Each "atomic" change (a fix or a new feature) has its own commit (useful for reverts, history, ...)
I propose to start with one small PR for example "H.264 Video Stream Corruption (RPi5)" or "Video Stream Not Working Over WiFi Hotspot" so that we can make sure we have a good workflow for the next ones.
PlanktoScope Update Dashboard - Release Documentation
Date: 2026-01-26 (Updated)
Base Repository/Branch: PlanktoScope Dashboard 2.0 - update-dashboard
Author: Adam Larson
This release implements two feature enhancements, along with bug fixes identified during deployment testing on Raspberry Pi 5 hardware. The changes enable real-time plankton detection during sample acquisition and provide quantitative focus quality metrics for quality assurance.
Changes Overview
PR #1: Live Segmentation Feature
Problem
Operators had no real-time feedback during sample acquisition. They could only assess sample quality and object detection after the acquisition completed, leading to wasted time on poorly focused or improperly positioned samples.
Solution Implemented
A real-time segmentation system that processes each captured frame during acquisition, overlays detected objects on a live preview, and provides immediate visual feedback to operators.
Architecture
Files Added
1. segmenter/planktoscope/segmenter/live.py (NEW, 450 lines)
Purpose: Backend process for real-time frame segmentation.
Key Implementation Details:
Core Methods:
- segment_single_frame(img)
- _create_simple_mask(img)
- _load_pixel_size(): reads process_pixel_fixed from /home/pi/PlanktoScope/hardware.json
- _esd_um_to_min_area(esd_um)
- _encode_mask_png(mask)
- _is_static_object(bbox)
Pixel Size Calibration:
The live segmenter reads the pixel size from /home/pi/PlanktoScope/hardware.json (the process_pixel_fixed field) to ensure consistency with the calibration dashboard. It falls back to 0.75 µm/pixel if the config is unavailable.
Improved Static Object Detection Algorithm:
The system tracks object positions across frames to identify debris stuck on the flow cell glass:
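The tracking described above might be sketched as follows. This is a minimal sketch: the class name, grid-based position binning, and reset behaviour are assumptions; the 2-consecutive-frame threshold is taken from Bug Fix #6.

```python
from collections import defaultdict

class StaticObjectTracker:
    """Track object positions across frames to flag debris stuck on the
    flow-cell glass. Names and defaults here are illustrative."""

    def __init__(self, grid_size=32, static_threshold=2):
        self.grid_size = grid_size                # px per grid cell for position binning
        self.static_threshold = static_threshold  # frames before an object counts as static
        self.position_counts = defaultdict(int)

    def _cell(self, bbox):
        # Bin the bbox centre into a coarse grid so small jitter still matches.
        x, y, w, h = bbox
        return ((x + w // 2) // self.grid_size, (y + h // 2) // self.grid_size)

    def update(self, bboxes):
        """Return the subset of bboxes that are NOT static (moving plankton)."""
        seen = set()
        moving = []
        for bbox in bboxes:
            cell = self._cell(bbox)
            seen.add(cell)
            self.position_counts[cell] += 1
            if self.position_counts[cell] < self.static_threshold:
                moving.append(bbox)
        # Reset counts for cells with no detection this frame, so a particle
        # that merely pauses briefly is not filtered forever.
        for cell in list(self.position_counts):
            if cell not in seen:
                del self.position_counts[cell]
        return moving
```

An object reported at the same grid cell for two consecutive frames is dropped from the results, matching the behaviour described in Bug Fix #6.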
Rationale for Parameters:
Performance Limits:
2. segmenter/main.py (MODIFIED)
Change: Added live segmenter process initialization.
Rationale: Separate process ensures live preview doesn't impact main segmentation performance during post-acquisition batch processing.
3. frontend/src/pages/preview/segmentation/index.jsx (NEW, 482 lines)
Purpose: Live Segmentation Visualization page embedded in the Node-RED dashboard.
State Management:
Overlay Rendering:
Design Decision - Cyan Color Scheme:
- rgb(0, 128, 128) (teal)
- rgb(0, 220, 220) (cyan) at 35% opacity
4. frontend/src/pages/preview/segmentation/styles.module.css (NEW, 352 lines)
CSS module providing dark theme styling consistent with the Node-RED dashboard.
5. lib/scope.js (MODIFIED)
Added Functions: startLiveSegmentation(), stopLiveSegmentation()
MQTT Interface
Commands (topic segmenter/live):
- {"action": "start", "overlay": "bbox", "min_esd_um": 20, "remove_static": true}
- {"action": "stop"}
Status (topic status/segmenter/live):
- {"status": "Enabled", "overlay": "bbox"}
- {"objects": [...], "frame_blur": 45.2, "image": "base64...", "image_width": 4056, "image_height": 3040}
PR #2: Blur/Focus Quality Metric
Problem Statement
Operators needed quantitative feedback on image focus quality to:
Solution Implemented
A Laplacian variance-based blur metric with real-time visualization including sparkline trending and optional spatial heatmap overlay.
Technical Background
Laplacian Variance Method:
The Laplacian operator detects edges by computing the second derivative of image intensity. Sharp images have strong edges (high variance), while blurry images have weak edges (low variance).
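The measure can be sketched in a few lines of NumPy. The production code most likely uses OpenCV (cv2.Laplacian(gray, cv2.CV_64F).var()); this pure-NumPy equivalent is only for illustration.

```python
import numpy as np

def laplacian_variance(gray):
    """Blur metric: variance of the Laplacian (second derivative) of a
    2-D grayscale image. Higher value = sharper image."""
    img = gray.astype(np.float64)
    # 4-neighbour discrete Laplacian via shifted copies of an edge-padded image.
    p = np.pad(img, 1, mode="edge")
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img
    return lap.var()
```

A flat image has zero Laplacian everywhere (score 0), while a high-contrast checkerboard scores very high, which is exactly the sharp-vs-blurry separation described above.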
Calibration for PlanktoScope Optics:
Empirical testing with the PlanktoScope optical system established these thresholds:
Rationale: These thresholds were calibrated against manual focus assessment by trained operators using the specific optical configuration (IMX477 sensor, 25mm/12mm lens configuration).
Files Modified
1. segmenter/planktoscope/segmenter/operations.py (MODIFIED)
Added Functions: calculate_blur(), calculate_regional_blur()
Regional Blur Rationale:
A 4×4 grid provides sufficient spatial resolution to detect:
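A per-region version of the metric over a 4×4 grid might look like the sketch below. The function name matches the one listed under Files Modified; the internals are assumptions.

```python
import numpy as np

def _laplacian_var(img):
    # Variance of a 4-neighbour discrete Laplacian (sharpness proxy).
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4 * p[1:-1, 1:-1])
    return lap.var()

def calculate_regional_blur(gray, grid=4):
    """Return a grid x grid array of per-region blur scores, so spatially
    localized focus problems (e.g. a tilted flow cell) become visible."""
    h, w = gray.shape
    scores = np.empty((grid, grid))
    for i in range(grid):
        for j in range(grid):
            region = gray[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            scores[i, j] = _laplacian_var(region)
    return scores
```

The resulting 4×4 array maps directly onto the heatmap overlay cells described later in this PR.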
2. segmenter/planktoscope/segmenter/live.py (MODIFIED)
Integration:
3. frontend/src/pages/preview/segmentation/index.jsx (MODIFIED)
Added State:
Sparkline Implementation:
Heatmap Rendering:
UI Layout:
The blur visualization appears as a floating overlay panel in the bottom-right corner of the segmentation preview, containing:
EcoTaxa Integration
Per-object blur is exported to the TSV file for post-acquisition quality filtering:
This enables researchers to filter the dataset by focus quality during analysis.
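As a usage sketch, a researcher could filter the exported TSV on the object_blur_laplacian column (the column name appears in the testing checklist); the 50.0 threshold is purely illustrative, not a calibrated value.

```python
import csv

def filter_by_focus(tsv_path, min_blur=50.0):
    """Keep only rows whose object_blur_laplacian exceeds a threshold.
    Column name from the EcoTaxa export; the cutoff is illustrative."""
    with open(tsv_path, newline="") as fh:
        rows = list(csv.DictReader(fh, delimiter="\t"))
    return [r for r in rows if float(r["object_blur_laplacian"]) >= min_blur]
```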
Bug Fix #1: H.264 Video Stream Corruption (RPi5)
Problem Statement
Severe video corruption manifested as blocky pixelization artifacts on Raspberry Pi 5 hardware. The live preview stream was unusable, displaying what appeared to be DivX-era compression artifacts.
Root Cause Analysis
Finding 1: Resolution Misalignment
The original RPi5 preview resolution 2028×1520 violates H.264 macroblock alignment requirements. The RPi4 resolution 1440×1080 was properly aligned.
Finding 2: B-Frame Incompatibility
Research identified that WebRTC (used by MediaMTX for browser streaming) does not support H.264 B-frames:
References:
Changes Applied
1. controller/camera/hardware.py
Resolution Fix (Line 21): changed (2028, 1520) to (1920, 1440).
Rationale for 1920×1440:
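The misalignment can be verified with quick arithmetic. The 16-pixel macroblock size is standard H.264; this sketch checks the width only, since that is the dimension at fault in 2028×1520.

```python
# Quick arithmetic check of H.264 macroblock alignment for the widths
# discussed above (macroblocks are 16x16 pixels).
def width_aligned(width, mb=16):
    return width % mb == 0

assert not width_aligned(2028)  # 2028 = 16*126 + 12 -> misaligned
assert width_aligned(1920) and width_aligned(1440)
```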
Encoder Configuration (Lines 287-301):
Parameter Rationale:
- profile="baseline"
- repeat=True
- iperiod=15
2. os/mediamtx/mediamtx.yml
Added:
Rationale: The RPi5 software encoder can produce data faster than the default buffer allows, causing packet drops. Increased buffer prevents overflow.
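The corresponding MediaMTX setting might look like the fragment below; the value comes from the deployment summary, but its exact placement within mediamtx.yml may differ.

```yaml
# Enlarged write queue to absorb bursts from the RPi5 software encoder.
writeQueueSize: 1024
```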
Outcome
Status: Partial Success
The changes significantly reduced but did not completely eliminate stream artifacts. Remaining issues are likely related to:
Recommendation: Consider hardware-accelerated encoding solutions or alternative streaming protocols (MJPEG) for production deployment.
Bug Fix #2: Calibration Settings Persistence
Problem Statement
Calibration settings (pixel size, white balance gains, pump steps/mL) were lost on system restart, requiring operators to recalibrate after each power cycle.
Root Cause
Node-RED's localfilesystem context storage module flushes to disk every 30 seconds by default. If the system restarts within this window, unsaved calibration data is lost.
Solution
node-red/settings.cjs (Lines 341-345)
Calibration Data Protected:
- calibration_pixel_size
- calibration_scale_factor
- calibration_wbg_red
- calibration_wbg_blue
- calibration_nb_step
- calibration_markerA_*
- calibration_markerB_*
Trade-offs
Bug Fix #3: Video Stream Not Working Over WiFi Hotspot
Problem Statement
Video streams work over ethernet but fail when connecting via the PlanktoScope's WiFi hotspot. The UI loads correctly but the video stream shows a spinning wheel indefinitely.
Root Cause
WebRTC ICE candidate gathering fails on the hotspot network because:
Solution
Enabled webrtcAdditionalHosts in the MediaMTX configuration to explicitly advertise local IPs as valid ICE candidates, bypassing the need for STUN.
Files Modified
- os/mediamtx/mediamtx.yml: webrtcAdditionalHosts configuration
- /usr/local/etc/mediamtx.yml (on device)
Bug Fix #4: White Balance and LED Intensity Not Persisting
Problem Statement
White balance (red/blue gains) and LED intensity calibration settings were lost after system restart, even though the flushInterval fix (Bug Fix #2) was applied.
Root Cause
The flushInterval fix only addressed Node-RED context storage. However:
- Changes were never written back to hardware.json
- Nothing loaded hardware.json into Node-RED context at startup
- The controller reads hardware.json, but Node-RED only saved to its own context
Solution
Implemented a complete persistence pipeline:
- Save settings to hardware.json when changed
- Load hardware.json into Node-RED context
Files Modified
- node-red/projects/dashboard/flows.json
- default-configs/v3.0.hardware.json: added led_intensity field
Data Flow
Deployment Summary
Files Added (New)
- segmenter/planktoscope/segmenter/live.py
- frontend/src/pages/preview/segmentation/index.jsx
- frontend/src/pages/preview/segmentation/styles.module.css
- frontend/src/pages/preview/SegmentationOverlay.jsx
- frontend/src/pages/preview/SegmentationOverlay.module.css
- segmenter/main.py
- segmenter/planktoscope/segmenter/operations.py: calculate_blur(), calculate_regional_blur()
- segmenter/planktoscope/segmenter/__init__.py: segment_single_frame() helper
- lib/scope.js: startLiveSegmentation(), stopLiveSegmentation()
- controller/camera/hardware.py
- os/mediamtx/mediamtx.yml: writeQueueSize: 1024, added webrtcAdditionalHosts
- node-red/settings.cjs: flushInterval: 5
- node-red/projects/dashboard/flows.json
- default-configs/v3.0.hardware.json: added led_intensity field
Deployment Commands
Testing Checklist
Live Segmentation
Blur Metric
- object_blur_laplacian column present in TSV export
Video Stream (RPi5)
Video Stream Over WiFi Hotspot
Calibration Persistence (Node-RED Context)
White Balance & LED Intensity Persistence (hardware.json)
- hardware.json contains updated red_gain and blue_gain
- hardware.json contains updated led_intensity
Known Limitations
- Use .local hostnames for most reliable connectivity
Bug Fix #5: Motion Blur from Pump Synchronization Race Condition
Problem Statement
During stop-flow acquisition, approximately 1 in every 2-3 captured images showed motion blur, even with adequate stabilization delay. Images were captured while the pump was still running.
Root Cause
Race condition in MQTT pump synchronization: when starting a new pump cycle, stale "Done" messages from the previous cycle would trigger _done.set() before the new cycle completed, causing capture to happen prematurely.
Solution
Modified _receive_messages() in controller/imager/main.py to check whether we are actually waiting for a "Done" message before processing it.
Outcome
All captures now occur after proper stabilization. No more intermittent motion blur.
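The guard described in this fix might be sketched as follows. Class and attribute names other than _receive_messages() and _done are illustrative, not the actual controller code.

```python
import threading

class PumpClient:
    """Sketch of the stale-"Done" guard from Bug Fix #5."""

    def __init__(self):
        self._done = threading.Event()
        self._waiting_for_done = False

    def start_pump_cycle(self):
        # Arm the wait *before* commanding the pump, clearing any stale state.
        self._done.clear()
        self._waiting_for_done = True

    def _receive_messages(self, payload):
        # Ignore "Done" messages unless a cycle is actually in flight, so a
        # leftover message from the previous cycle cannot fire the event early.
        if payload.get("status") == "Done":
            if not self._waiting_for_done:
                return  # stale message from a previous cycle
            self._waiting_for_done = False
            self._done.set()
```

With this check, a "Done" arriving before start_pump_cycle() has armed the wait is simply dropped, which is the behaviour the Outcome section reports.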
Bug Fix #6: Static Object Detection Improvements
Problem Statement
Debris stuck on flow cell glass was not being filtered during live segmentation despite the static object removal feature existing.
Root Cause
remove_static defaulted to False (disabled).
Solution
- Changed the remove_static default from False to True
- Adjusted grid_size and static_threshold
Outcome
Debris on glass is now filtered after appearing in the same position for 2 consecutive frames.
Bug Fix #7: Stabilization Time Increased
Problem Statement
Default 0.5 second stabilization was insufficient for particles to fully settle after pump motion.
Solution
Changed the sleep parameter in Node-RED flows from 0.5 to 1.0 seconds.
Outcome
Adequate settling time for most samples. Adds ~0.5s per frame to acquisition time.
Appendix: Blur Metric Scientific Basis
The Laplacian variance method is a well-established focus measure in computer vision literature:
Method: Computes the variance of the Laplacian (second derivative) of image intensity.
Mathematical Basis:
Focus = Var(∇²I) = E[(∇²I − E[∇²I])²]
where ∇²I is the Laplacian of image I.
Properties:
Reference: Pech-Pacheco, J.L., et al. "Diatom autofocusing in brightfield microscopy: a comparative study." ICPR 2000.