feat(centerpoint): integrate CenterPoint into unified deployment pipeline #161
Conversation
Pull request overview
This PR integrates CenterPoint into a unified deployment pipeline by creating a new project bundle under deployment/projects/centerpoint/ and removing legacy deployment scripts. The integration enables PyTorch, ONNX, and TensorRT export/evaluation workflows through a componentized multi-file ONNX approach.
Key Changes:
- Centralizes CenterPoint deployment under a unified CLI (`deployment.cli.main centerpoint`)
- Implements multi-file ONNX export (voxel encoder + backbone/head components) with per-component TensorRT engine conversion (sketched below)
- Removes legacy deployment entrypoints (`projects/CenterPoint/scripts/deploy.py`, `projects/CenterPoint/runners/deployment_runner.py`, `projects/CenterPoint/models/detectors/centerpoint_onnx.py`)
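As a rough, hypothetical sketch of the componentized export idea (the stub modules, tensor shapes, and file names below are illustrative stand-ins, not the PR's actual classes):

```python
# Hypothetical sketch of multi-file ONNX export: each CenterPoint component is
# exported to its own ONNX graph so a TensorRT engine can be built per file
# afterwards (e.g. with trtexec). Stub modules and shapes are illustrative only.
import torch
import torch.nn as nn


class VoxelEncoderStub(nn.Module):
    """Stand-in for the voxel/pillar encoder component."""

    def forward(self, voxel_features: torch.Tensor) -> torch.Tensor:
        # The real encoder aggregates per-point features per voxel; a mean
        # reduction is enough to give the export a concrete graph.
        return voxel_features.mean(dim=1)


class BackboneHeadStub(nn.Module):
    """Stand-in for the fused backbone + detection head component."""

    def __init__(self) -> None:
        super().__init__()
        self.conv = nn.Conv2d(32, 2, kernel_size=1)

    def forward(self, spatial_features: torch.Tensor) -> torch.Tensor:
        return self.conv(spatial_features)


# Component 1: voxel encoder -> voxel_encoder.onnx
torch.onnx.export(
    VoxelEncoderStub(),
    (torch.randn(32, 20, 11),),  # (num_voxels, points_per_voxel, feat_dim), illustrative
    "voxel_encoder.onnx",
    input_names=["voxel_features"],
    output_names=["pillar_features"],
)

# Component 2: backbone + head -> backbone_head.onnx
torch.onnx.export(
    BackboneHeadStub(),
    (torch.randn(1, 32, 128, 128),),  # pseudo-image after scatter, illustrative
    "backbone_head.onnx",
    input_names=["spatial_features"],
    output_names=["head_outputs"],
)
```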
Reviewed changes
Copilot reviewed 26 out of 26 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| `projects/CenterPoint/scripts/deploy.py` | Removed legacy deployment script |
| `projects/CenterPoint/runners/deployment_runner.py` | Removed legacy runner implementation |
| `projects/CenterPoint/models/detectors/centerpoint_onnx.py` | Removed legacy ONNX detector variant |
| `projects/CenterPoint/models/__init__.py` | Removed ONNX model imports/exports |
| `projects/CenterPoint/README.md` | Updated with unified deployment CLI usage |
| `projects/CenterPoint/Dockerfile` | Added deployment dependencies (onnxruntime-gpu, tensorrt-cu12) |
| `deployment/projects/centerpoint/runner.py` | New unified deployment runner |
| `deployment/projects/centerpoint/pipelines/*.py` | Backend-specific inference pipelines (PyTorch/ONNX/TensorRT) |
| `deployment/projects/centerpoint/export/*.py` | ONNX/TensorRT export pipelines with component extraction |
| `deployment/projects/centerpoint/onnx_models/*.py` | Relocated ONNX model variants with import path fixes |
| `deployment/projects/centerpoint/*.py` | Core deployment components (entrypoint, evaluator, data_loader, model_loader) |
| `deployment/projects/centerpoint/config/deploy_config.py` | Centralized deployment configuration |
| `deployment/projects/centerpoint/cli.py` | CLI flag registration (`--rot-y-axis-reference`) |
```python
model_inputs: Tuple[TensorRTModelInputConfig, ...] = tuple(
    TensorRTModelInputConfig.from_dict(item) for item in model_inputs_raw
)

def from_dict(cls, config_dict: Mapping[str, Any]) -> "TensorRTConfig":
```
Use `from __future__ import annotations` so the self-referencing return type doesn't need the string quotes.
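A small illustration of the suggestion, with the class body abbreviated and default values assumed for the sketch:

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Any, Mapping

DEFAULT_WORKSPACE_SIZE = 1 << 30  # assumed placeholder value


@dataclass
class TensorRTConfig:
    max_workspace_size: int = DEFAULT_WORKSPACE_SIZE

    @classmethod
    def from_dict(cls, config_dict: Mapping[str, Any]) -> TensorRTConfig:
        # With the __future__ import, annotations are evaluated lazily, so the
        # return type can name the class being defined without string quotes.
        return cls(
            max_workspace_size=config_dict.get("max_workspace_size", DEFAULT_WORKSPACE_SIZE),
        )
```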
```python
    max_workspace_size=config_dict.get("max_workspace_size", DEFAULT_WORKSPACE_SIZE),
)

def get_precision_policy(self) -> str:
```
Adding `@property` is cleaner, and the `get_` prefix can be dropped.
Thanks, fixed in another PR (b75687c).
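For reference, a minimal sketch of the suggested shape; the backing attribute and default value are assumptions for illustration, since the real method body isn't shown in this diff:

```python
class TensorRTConfig:
    def __init__(self, precision_policy: str = "fp16") -> None:
        self._precision_policy = precision_policy  # hypothetical backing field

    @property
    def precision_policy(self) -> str:
        # Called as `config.precision_policy` instead of
        # `config.get_precision_policy()`.
        return self._precision_policy
```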
```python
self.export_config = ExportConfig.from_dict(deploy_cfg.get("export", {}))
self.runtime_config = RuntimeConfig.from_dict(deploy_cfg.get("runtime_io", {}))
self.backend_config = BackendConfig.from_dict(deploy_cfg.get("backend_config", {}))
self.tensorrt_config = TensorRTConfig.from_dict(deploy_cfg.get("tensorrt_config", {}) or {})
```
Does `TensorRTConfig.from_dict` return `{}`? If so, we don't need the `or {}`.
Thanks, fixed in 152ff54
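For context on the `or {}`, a minimal standalone illustration with plain dicts (not the PR's classes):

```python
# .get() with a default only covers a *missing* key; it still yields None when
# the key is present but explicitly set to None, so `or {}` guards that case.
deploy_cfg = {"tensorrt_config": None}

without_guard = deploy_cfg.get("tensorrt_config", {})        # -> None
with_guard = deploy_cfg.get("tensorrt_config", {}) or {}     # -> {}

assert without_guard is None
assert with_guard == {}
```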
Closing this PR and splitting it into two new PRs: one for the refactor and one for integrating CenterPoint.
Summary
Change points
New CenterPoint deployment project
- Project registration + CLI flag: `deployment/projects/centerpoint/cli.py` (`--rot-y-axis-reference`)
- Wiring/entrypoint: `deployment/projects/centerpoint/entrypoint.py`
- Runtime config: `deployment/projects/centerpoint/config/deploy_config.py`
- Dataloader / evaluator / model building:
- Backend pipelines (staged inference aligned across backends):
- Export pipelines
- ONNX model variants moved under deployment: `deployment/projects/centerpoint/onnx_models`

Docs / packaging
- Adds `projects/CenterPoint/Dockerfile` for deployment dependencies (onnxruntime-gpu, onnxsim, pycuda, tensorrt-cu12).
- Updates `projects/CenterPoint/README.md` with unified deployment usage example.
How to run
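The exact commands aren't preserved in this capture. As a hypothetical sketch, only the `deployment.cli.main` module, the `centerpoint` subcommand, and the `--rot-y-axis-reference` flag are named in this PR; any other arguments would need to be checked against the actual CLI:

```bash
# Hypothetical invocation sketch; verify the available flags with the CLI itself.
python -m deployment.cli.main centerpoint --help
python -m deployment.cli.main centerpoint --rot-y-axis-reference
```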
Exported ONNX (Same)
Voxel Encoder

Backbone Head

Evaluation result (Same for Deployment and Test)
Test with test.py
Frame:
Total Num: 19
Skipped Frames: []
Skipped Frames Count: 0
Ground Truth Num: 650
mAP: 0.5860, mAPH: 0.5621 (Center Distance BEV)
Label: car
Label: truck
Label: bus
Label: bicycle
Label: pedestrian
Summary:
mAP: 0.6076, mAPH: 0.5831 (Plane Distance)
Label: car
Label: truck
Label: bus
Label: bicycle
Label: pedestrian
Summary:
Test with Deployment pipeline
PYTORCH Results:
mAP: 0.5860, mAPH: 0.5621 (Center Distance BEV)
Label: car
Label: truck
Label: bus
Label: bicycle
Label: pedestrian
Summary:
mAP: 0.6076, mAPH: 0.5831 (Plane Distance)
Label: car
Label: truck
Label: bus
Label: bicycle
Label: pedestrian
Summary:
ONNX Results:
mAP: 0.5860, mAPH: 0.5619 (Center Distance BEV)
Label: car
Label: truck
Label: bus
Label: bicycle
Label: pedestrian
Summary:
mAP: 0.6072, mAPH: 0.5830 (Plane Distance)
Label: car
Label: truck
Label: bus
Label: bicycle
Label: pedestrian
Summary:
TENSORRT Results:
mAP: 0.5866, mAPH: 0.5629 (Center Distance BEV)
Label: car
Label: truck
Label: bus
Label: bicycle
Label: pedestrian
Summary:
mAP: 0.6097, mAPH: 0.5846 (Plane Distance)
Label: car
Label: truck
Label: bus
Label: bicycle
Label: pedestrian
Summary: