
Depthai1 #9

Open

tqmsh wants to merge 9 commits into main from depthai1

Conversation

@tqmsh
Member

@tqmsh tqmsh commented Apr 1, 2026


Migrate OAK-D pipeline to DepthAI 3.x (Pipeline 2.0)

Reference docs

Summary

  • Rewrote the full on-device pipeline from depthai 2.x (legacy XLink API) to depthai 3.x (Pipeline 2.0)
  • Replaced YoloSpatialDetectionNetwork + custom blob with SpatialDetectionNetwork.build() using Luxonis model zoo (yolov6-nano)
  • Replaced MonoCamera/ColorCamera with Camera.build(CAM_A/B/C)
  • Removed all XLinkOut nodes — use createOutputQueue() directly on node outputs
  • Replaced dai.Device(pipeline) with dai.Pipeline() context manager + pipeline.start()
  • Deleted scripts/download_model.py (dead code — model auto-downloads from zoo on first run)
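A condensed sketch of the new pipeline shape, assuming the node and queue names described in the bullets above. The exact `build()` signatures follow this PR's description and may differ across depthai 3.x minor versions, so treat this as illustrative rather than copy-paste:

```python
import depthai as dai

# Pipeline 2.0: the Pipeline object owns the device — no dai.Device(pipeline),
# no XLinkOut nodes, no host-side queue wiring.
with dai.Pipeline() as pipeline:
    # Camera.build() replaces ColorCamera/MonoCamera (sockets per this PR:
    # CAM_A = RGB, CAM_B/CAM_C = stereo pair).
    cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)
    left = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
    right = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)

    stereo = pipeline.create(dai.node.StereoDepth)
    left.requestOutput((640, 400)).link(stereo.left)
    right.requestOutput((640, 400)).link(stereo.right)

    # Model is fetched from the Luxonis zoo by name; no local blob management.
    nn = pipeline.create(dai.node.SpatialDetectionNetwork).build(
        cam, stereo, "yolov6-nano"
    )

    # createOutputQueue() directly on node outputs replaces XLinkOut.
    det_q = nn.out.createOutputQueue()

    pipeline.start()
    while pipeline.isRunning():
        for d in det_q.get().detections:
            c = d.spatialCoordinates
            print(f"xyz=({c.x:.0f}mm, {c.y:.0f}mm, {c.z:.0f}mm)")
```

This runs entirely on-device plus one host loop; the queue reads block until the next inference result arrives.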

Hardware-tested on OAK-D (MyriadX)

Setup requirements:

  • USB-C ↔ USB-C cable (not USB-A → USB-C adapter — insufficient power, OAK-D draws up to 1A)
  • Windows native Python (not WSL2 — usbipd-win can't reattach fast enough after OAK-D's mid-boot USB reset)
  • $env:DEPTHAI_WATCHDOG=0 before running (disables MyriadX watchdog that kills device during model load)
  • Always unplug/replug USB-C before running if previous session didn't exit cleanly (Ctrl+C, crash). With watchdog disabled, device can't self-reset — stale XLink state causes hangs on next run.
  • If you previously used usbipd bind, run usbipd unbind --busid <id> first to restore the native Windows USB driver

Run commands:

```powershell
cd C:\projects\ML-CV-Target-Tracking
$env:DEPTHAI_WATCHDOG=0; venv_win\Scripts\python main_2025.py
```

Proof — terminal output from hardware test (2026-04-01):

```text
[14442C108175D2D200] [1.2] [host] [warning] Watchdog disabled! In case of unclean exit, the device needs reset or power-cycle for next run
Target ID 0: xyz=(-363mm, -350mm, 1984mm)  bbox=(130, 28, 237, 229)
Target ID 2: xyz=(322mm, -43mm, 588mm)  bbox=(417, 44, 511, 281)
Target ID 0: xyz=(-388mm, -197mm, 1928mm)  bbox=(130, 35, 227, 256)
Target ID 0: xyz=(-370mm, -200mm, 1936mm)  bbox=(130, 34, 229, 265)
Target ID 0: xyz=(-359mm, -321mm, 1948mm)  bbox=(131, 30, 234, 237)
Target ID 2: xyz=(315mm, -56mm, 574mm)  bbox=(418, 28, 512, 279)
Target ID 0: xyz=(-358mm, -337mm, 1968mm)  bbox=(131, 29, 237, 220)
Target ID 2: xyz=(313mm, -39mm, 573mm)  bbox=(417, 46, 511, 280)
```
  • Target ID 0: person ~2m away (z≈1968mm)
  • Target ID 2: person ~0.6m away (z≈573mm)
  • cv2 window shows live RGB feed with green bounding boxes + distance labels

Key debugging findings

| Problem | Root cause | Fix |
|---------|------------|-----|
| WSL2 `X_LINK_DEVICE_NOT_FOUND` | usbipd reattach latency (8s) > depthai boot timeout after OAK-D USB reset | Run on Windows natively |
| Device hangs after watchdog warning | Stale XLink state from previous unclean exit (`XLINK_WRITE_RESP 1`) | Unplug USB-C, wait 10s, replug |
| No LED, device unresponsive | usbipd stub driver still installed (STATE=Shared) | `usbipd unbind --busid <id>` |
| `YoloSpatialDetectionNetwork` AttributeError | Node removed in depthai 3.x | Use `SpatialDetectionNetwork.build()` with model zoo |
| depthai 2.x full pipeline always hangs | Blob upload + neural engine init causes device crash/stale-state cycle | Migrated to 3.x — model zoo handles blob management |

Files changed

  • main_2025.py — 3.x pipeline API (dai.Pipeline(), createOutputQueue(), pipeline.isRunning())
  • modules/target_tracking/stereo_node.py — Camera.build(CAM_B/C) + requestOutput((640,400))
  • modules/target_tracking/spatial_detection_node.py — SpatialDetectionNetwork.build(cam, stereo, "yolov6-nano")
  • modules/target_tracking/object_tracker_node.py — type hint update
  • config.yaml — model_name: "yolov6-nano" replaces model_path
  • requirements-pytorch.txt — depthai>=3.5.0
  • .gitignore — added venv_win/, models/, .cache/, *.txt
  • Deleted scripts/download_model.py (dead code)

Qstrich and others added 5 commits March 16, 2026 21:19
- Introduced spatial detection settings in config.yaml, including model path and thresholds.
- Updated main_2025.py to load and utilize spatial detection and object tracking components in the pipeline.
- Added blobconverter and pyyaml as dependencies in requirements-pytorch.txt.

Co-Authored-By: Yujie Meng <192458226+Yujie-Meng@users.noreply.github.com>
Co-Authored-By: Yujie Meng <yujiemengca@gmail.com>
Co-Authored-By: Yujie <yujiemengca@gmail.com>
…proach

- Brings in ultralytics + depthai into requirements.txt from stereodepth
- Drops depthai_detector.py (Pipeline 2.0 class) in favour of existing
  spatial_detection_node.py / object_tracker_node.py functional approach
- Removes tests/unit/test_pipeline_nodes.py

Co-authored-by: Yujie Meng <192458226+Yujie-Meng@users.noreply.github.com>
Co-authored-by: Yujie Meng <yujiemengca@gmail.com>
Co-authored-by: Yujie <yujiemengca@gmail.com>
- Fix dead xout_rgb: switch color_cam.video → preview so bbox coords align
- Add rgb_queue read + cv2 bbox overlay and imshow in main loop
- Make model_path required in create_spatial_detection_network (no fallback)
- Drop blobconverter dep; add opencv-python to requirements-pytorch.txt
- Update config.yaml: model_path now required, documents conversion path

Co-authored-by: Yujie Meng <192458226+Yujie-Meng@users.noreply.github.com>
Co-authored-by: Yujie Meng <yujiemengca@gmail.com>
Co-authored-by: Yujie <yujiemengca@gmail.com>
Replace deprecated MonoCamera/ColorCamera with Camera.build(),
YoloSpatialDetectionNetwork with SpatialDetectionNetwork.build()
using Luxonis model zoo, XLinkOut with createOutputQueue(), and
dai.Device with dai.Pipeline context manager.

Co-authored-by: Yujie Meng <192458226+Yujie-Meng@users.noreply.github.com>
Co-authored-by: Yujie Meng <yujiemengca@gmail.com>
Co-authored-by: Yujie <yujiemengca@gmail.com>
tqmsh and others added 2 commits May 2, 2026 14:13
Resolves missing-function-docstring warning causing CI exit code 16.

Co-authored-by: Yujie Meng <192458226+Yujie-Meng@users.noreply.github.com>
…estimate

Co-authored-by: Yujie Meng <192458226+Yujie-Meng@users.noreply.github.com>
Co-authored-by: Yujie Meng <yujiemengca@gmail.com>
Co-authored-by: Yujie <yujiemengca@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
tqmsh and others added 2 commits May 10, 2026 00:12
Replaces the two-bucket constant offset (27/74mm split at 750mm) with a
piecewise-linear interpolation over 4 measured anchors (0.5/1.0/1.5/2.0m)
that tapers smoothly to zero past 2.2m, where the camera's factory
calibration is trusted. The previous scheme over-corrected at 2m,
turning a -31mm raw bias into -105mm; the new one avoids this by
capturing the genuine non-monotonicity of the bias profile.

Residual bias after calibration (vs. raw):

| Distance | Raw bias (mm) | Calibrated bias (mm) |
|----------|---------------|----------------------|
| 0.5m     | +27.5         | -0.14                |
| 1.0m     | +75.1         | +0.12                |
| 1.5m     | +73.2         | +0.48                |
| 2.0m     | -48.7         | -3.61                |

Also commits the underlying test logs (2026-04-30, 2026-05-07) and the
calibration visualization for reproducibility.
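For reference, the piecewise-linear correction described above can be sketched as follows. Anchor distances and raw biases are taken from the table; the clamp-below-0.5m behaviour is an assumption, and the function name is hypothetical:

```python
import numpy as np

# Raw depth bias (mm) measured at the four anchor distances, plus a 2.2 m
# anchor where the correction tapers to zero (factory calibration is
# trusted beyond that point). Values from the residual-bias table above.
ANCHORS_MM = np.array([500.0, 1000.0, 1500.0, 2000.0, 2200.0])
RAW_BIAS_MM = np.array([27.5, 75.1, 73.2, -48.7, 0.0])

def calibrate_depth(z_mm: float) -> float:
    """Subtract the interpolated bias from a raw stereo depth reading."""
    # np.interp is piecewise-linear and clamps outside the anchor range,
    # so beyond 2.2 m the bias is 0 and raw depth passes through unchanged.
    return z_mm - float(np.interp(z_mm, ANCHORS_MM, RAW_BIAS_MM))
```

At 1.0m this subtracts the full +75.1mm bias; between anchors the correction blends the neighbouring biases linearly, which is what avoids the over-correction the two-bucket scheme produced around 2m.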

![calibration](https://github.com/UWARG/ML-CV-Target-Tracking/raw/depthai1/documentation/accuracy/calibration.png)

Co-authored-by: Yujie Meng <192458226+Yujie-Meng@users.noreply.github.com>
Co-authored-by: Yujie Meng <yujiemengca@gmail.com>
Co-authored-by: Yujie <yujiemengca@gmail.com>
CI failed black --check on the previous commit (column-aligned tuples
and an extra blank line). Reformat to satisfy the checker; no semantic
change.

Co-authored-by: Yujie Meng <192458226+Yujie-Meng@users.noreply.github.com>
Co-authored-by: Yujie Meng <yujiemengca@gmail.com>
Co-authored-by: Yujie <yujiemengca@gmail.com>