Merged
Commits
25 commits
51a8150
correct tx,ty
rjsun06 Feb 19, 2025
3891eed
updating tx in lidar to camera
AadarshHegde123 Feb 20, 2025
63432ef
lidar->vehecle matrix. and update manual pointpicking
rjsun06 Feb 20, 2025
fd501fa
Merge branch 'EstCal_A' of https://github.com/krishauser/GEMstack int…
rjsun06 Feb 20, 2025
1b44351
camera to vehicle by point selecting.
rjsun06 Feb 20, 2025
9c91ffc
dumb typo
rjsun06 Feb 20, 2025
11907a3
dumb typo
rjsun06 Feb 20, 2025
fa73970
adding testing script and lidar to cam script
AadarshHegde123 Feb 21, 2025
5e04c43
Add camera transform to yaml file
michalj1 Feb 24, 2025
adc4dd5
Added README.md
whizkid42 Feb 24, 2025
9d47105
Updated README.md
whizkid42 Feb 25, 2025
66bc2bd
improved camera->vehicle
rjsun06 Feb 25, 2025
f09bde8
removing unnecessary file
AadarshHegde123 Feb 26, 2025
af7b5ec
Rename make_gem_e4_ouster_v2.py to lidar_to_vehicle.py
AadarshHegde123 Feb 26, 2025
c247a43
Update README.md
AadarshHegde123 Feb 26, 2025
28d728f
Update README.md
AadarshHegde123 Feb 26, 2025
81c5fb5
demo scripts
rjsun06 Feb 26, 2025
70a8173
Merge branch 'EstCal_A' of https://github.com/krishauser/GEMstack int…
rjsun06 Feb 26, 2025
5e70824
Updated date
whizkid42 Feb 26, 2025
3b041a8
Merge remote-tracking branch 'origin/EstCal_A' into EstCal_A
whizkid42 Feb 26, 2025
fb2bf76
fixing yaml matrix
AadarshHegde123 Feb 26, 2025
2e5bea5
code restructure + clean up
AadarshHegde123 Feb 26, 2025
799c566
removing conflict
AadarshHegde123 Feb 26, 2025
97f76c5
clean-up tested + working
AadarshHegde123 Feb 26, 2025
16cd934
readme update
AadarshHegde123 Feb 26, 2025
Binary file removed .DS_Store
Binary file modified GEMstack/.DS_Store
2 changes: 1 addition & 1 deletion GEMstack/knowledge/calibration/gem_e4.yaml
@@ -1,4 +1,4 @@
-calibration_date: "2024-03-05" # Date of calibration YYYY-MM-DD
+calibration_date: "2025-02-16" # Date of calibration YYYY-MM-DD
reference: rear_axle_center # rear axle center
rear_axle_height: 0.33 # height of rear axle center above flat ground
gnss_location: [1.10,0,1.62] # meters, taken from https://github.com/hangcui1201/POLARIS_GEM_e2_Real/blob/main/vehicle_drivers/gem_gnss_control/scripts/gem_gnss_tracker_stanley_rtk.py. Note conflict with pure pursuit location?
6 changes: 4 additions & 2 deletions GEMstack/knowledge/calibration/gem_e4_oak.yaml
@@ -1,3 +1,5 @@
reference: rear_axle_center # rear axle center
-rotation: [[0,0,1],[-1,0,0],[0,-1,0]] # rotation matrix mapping z to forward, x to left, y to down, guesstimated
-center_position: [1.78,0,1.58] # meters, center camera, guesstimated
+rotation: [[-0.025131, -0.0304479, 0.99922038],
+           [-0.99892897, 0.0396095, -0.0239167],
+           [-0.0388504, -0.99875123, -0.03141071]] # rotation matrix mapping z to forward, x to left, y to down, guesstimated
+center_position: [1.78251567,0.18649424,1.5399846] # meters, center camera, guesstimated
6 changes: 4 additions & 2 deletions GEMstack/knowledge/calibration/gem_e4_ouster.yaml
@@ -1,3 +1,5 @@
reference: rear_axle_center # rear axle center
-position: [1.10,0,2.03] # meters, calibrated by Hang's watchful eye
-rotation: [[1,0,0],[0,1,0],[0,0,1]] #rotation matrix mapping lidar frame to vehicle frame
+position: [1.1, 0.03773044170906172, 1.9525244316515322]
+rotation: [[ 0.99941328, 0.02547416, 0.02289458],
+          [-0.02530855, 0.99965159, -0.00749488],
+          [-0.02307753, 0.00691106, 0.99970979]]
86 changes: 86 additions & 0 deletions GEMstack/offboard/calibration/README.md
@@ -0,0 +1,86 @@
# GEMstack Offboard Calibration

This directory contains tools for offline calibration of the LiDAR and camera sensors to the vehicle coordinate system on the GEM E4 platform. The calibration pipeline consists of three stages:

1. **LiDAR-to-Vehicle**
2. **Camera-to-Vehicle**
3. **LiDAR-to-Camera**

---


## Calibration Pipeline

### 1. LiDAR-to-Vehicle Calibration (`lidar_to_vehicle.py`)
**Method**:
- **Ground Plane Detection**:
1. Crop LiDAR points near ground (Z ∈ [-3, -2])
2. Use RANSAC to fit a plane to ground points
3. Compute the LiDAR height (`tz`) from the plane equation and the rear-axle height offset (0.2794 m)
4. Derive pitch (`rx`) and roll (`ry`) from plane normal vector

- **Frontal Object Alignment**:
1. Crop LiDAR data to frontal area (X ∈ [0,20], Y ∈ [-1,1])
2. Fit line to object points for yaw (`rz`) estimation
3. Compute translation (`tx`, `ty`) using vehicle dimensions

**Usage**:

    python3 lidar_to_vehicle.py   # edit the LiDAR data paths in the script first
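The ground-plane stage above can be sketched as follows. This is a minimal illustration, not the repository's exact code: `fit_ground_plane` and `ground_to_pose` are hypothetical names, and the sign conventions for the recovered angles are assumptions.

```python
import numpy as np

AXLE_HEIGHT = 0.2794  # rear-axle height offset used by the pipeline (meters)

def fit_ground_plane(points, iters=200, thresh=0.02, seed=0):
    """Simple RANSAC plane fit: returns (normal, d) with normal . p + d = 0."""
    rng = np.random.default_rng(seed)
    best_inliers, best = 0, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:        # degenerate (collinear) sample, try again
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.sum(np.abs(points @ n + d) < thresh)
        if inliers > best_inliers:
            best_inliers, best = inliers, (n, d)
    return best

def ground_to_pose(normal, d):
    """Turn a fitted ground plane into (tz, rx, ry) estimates."""
    if normal[2] < 0:          # orient the normal upward
        normal, d = -normal, -d
    tz = abs(d) - AXLE_HEIGHT  # sensor height above ground, minus axle offset
    # small tilt angles of the ground normal give pitch/roll corrections
    rx = np.arctan2(normal[1], normal[2])
    ry = -np.arctan2(normal[0], normal[2])
    return tz, rx, ry
```

On a perfectly flat synthetic ground at z = -2 this recovers a height of `2 - 0.2794` and zero tilt.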


### 2. Camera-to-Vehicle Calibration (`camera_to_vehicle_manual.py`)
**Method**:
1. Capture camera intrinsics using camera_info.py (ROS node)
2. Manually select 4 matching points in RGB image and LiDAR cloud
3. Solve PnP problem to compute camera extrinsics

**Usage**:
1. Get camera intrinsics:

       rosrun offboard/calibration/camera_info.py   # prints the intrinsic matrix

2. Update `camera_in` in the script with the intrinsics
3. Update the data paths in the script
4. Run the calibration:

       python3 camera_to_vehicle_manual.py
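The PnP step returns an axis-angle rotation `rvec` and a translation `tvec`; these assemble into a 4x4 vehicle->camera matrix whose inverse is the camera->vehicle extrinsic. A minimal numpy sketch (illustrative function names; `rodrigues` re-implements the axis-angle-to-matrix conversion that `cv2.Rodrigues` performs):

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (same convention as cv2.Rodrigues)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    kx, ky, kz = rvec / theta
    K = np.array([[0, -kz, ky],
                  [kz, 0, -kx],
                  [-ky, kx, 0]])
    # Rodrigues' formula: R = I + sin(t) K + (1 - cos(t)) K^2
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def extrinsic_from_pnp(rvec, tvec):
    """Assemble the 4x4 vehicle->camera transform from solvePnP's output."""
    T = np.eye(4)
    T[:3, :3] = rodrigues(np.asarray(rvec, dtype=float).ravel())
    T[:3, 3] = np.asarray(tvec, dtype=float).ravel()
    return T

# The camera->vehicle extrinsic stored in the YAML is then simply:
# T_camera_vehicle = np.linalg.inv(extrinsic_from_pnp(rvec, tvec))
```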


### 3. LiDAR-to-Camera Calibration (`lidar_to_camera.py`)
**Method**:
1. Invert Camera-to-Vehicle matrix
2. Multiply with LiDAR-to-Vehicle matrix

**Usage**:

    python3 lidar_to_camera.py   # ensure the T_lidar_vehicle and T_camera_vehicle matrices are updated
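Since both sensors are expressed relative to the vehicle frame, the two steps above are a single matrix product. A sketch with illustrative placeholder matrices (the real values live in the scripts and YAML files):

```python
import numpy as np

def compose_lidar_to_camera(T_lidar_vehicle, T_camera_vehicle):
    """lidar->camera = (vehicle->camera) @ (lidar->vehicle),
    where vehicle->camera is the inverse of the camera->vehicle extrinsic."""
    return np.linalg.inv(T_camera_vehicle) @ T_lidar_vehicle
```

For pure translations this reduces to subtracting the mounting offsets, which is an easy sanity check.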


### Visualization Tools

**3D Alignment Check**:
1. Use the `vis()` function in the scripts to view the calibrated LiDAR/camera clouds
2. Set `VIS = True` in `lidar_to_vehicle.py` for ground-plane/object visualization
3. Use `test_transforms.py` to overlay the LiDAR point cloud on the PNG image; this helps verify the accuracy of the LiDAR->camera transform

**Projection Validation**:
1. RGB image overlaid with transformed LiDAR points (Z-buffered)
2. Frontal view comparison of camera and LiDAR data
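The Z-buffered overlay can be sketched as: project camera-frame LiDAR points through the intrinsics and keep only the nearest depth per pixel. This is a simplified illustration, not the repository's code; `project_lidar` is a hypothetical helper, and the per-point loop is kept for clarity.

```python
import numpy as np

def project_lidar(points_cam, K, w, h):
    """Project camera-frame 3D points to pixels, keeping the nearest depth
    per pixel (a simple Z-buffer). Returns an (h, w) depth image."""
    pts = points_cam[points_cam[:, 2] > 0]        # keep points in front of the camera
    uvw = (K @ pts.T).T                           # apply intrinsics
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)   # perspective divide -> pixel coords
    depth = np.full((h, w), np.inf)
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    for (u, v), z in zip(uv[ok], pts[ok, 2]):
        depth[v, u] = min(depth[v, u], z)         # nearer point wins the pixel
    return depth
```

The resulting depth image can be colorized and alpha-blended onto the RGB frame to eyeball the alignment.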

### Assumptions and Notes

1. The sensor data should be time-aligned.
2. The flat ground surface should be detectable in the LiDAR scan.
3. Magic numbers:
   - Axle height (0.2794 m) from physical measurements
   - Frontal crop areas based on vehicle dimensions
4. Validation: use the pyvista visualizations to verify alignment.
@@ -3,18 +3,22 @@
from scipy.spatial.transform import Rotation as R
import matplotlib.pyplot as plt
import numpy as np
from visualizer import visualizer

-rgb_path = '/mount/wp/GEMstack/data/color21.png'
-depth_path = '/mount/wp/GEMstack/data/depth21.tif'
-lidar_path = '/mount/wp/GEMstack/data/lidar21.npz'
+N = 8 #how many point pairs you want to select
+
+# Update Depending on Where Data Stored
+rgb_path = './data/color32.png'
+depth_path = './data/depth32.tif'
+lidar_path = './data/lidar32.npz'

img = cv2.imread(rgb_path, cv2.IMREAD_UNCHANGED)

lidar_points = np.load(lidar_path)['arr_0']
lidar_points = lidar_points[~np.all(lidar_points== 0, axis=1)] # remove (0,0,0)'s

rx,ry,rz = 0.006898647163954201, 0.023800082245145304, -0.025318355743942974
-tx,ty,tz = -1.1, 0.037735827433173136, 1.953202227766785
+tx,ty,tz = 1.1, 0.037735827433173136, 1.953202227766785
rot = R.from_euler('xyz',[rx,ry,rz]).as_matrix()
lidar_ex = np.hstack([rot,[[tx],[ty],[tz]]])
lidar_ex = np.vstack([lidar_ex,[0,0,0,1]])
@@ -25,6 +29,7 @@
[ 0. , 0. , 1. ]
], dtype=np.float32)


#%%
# blurred = cv2.GaussianBlur(img, (5, 5), 0)
# edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
@@ -33,43 +38,11 @@

#%%
import pyvista as pv
-def vis(title='', ratio=1):
+def vis(title='', ratio=1,notebook=False):
print(title)
pv.set_jupyter_backend('client')
plotter = pv.Plotter(notebook=True)
plotter.show_axes()
class foo:
def set_cam(self,pos=(-20*ratio,0,20*ratio),foc=(0,0,0)):
plotter.camera.position = pos
plotter.camera.focal_point = foc
return self
def add_pc(self,pc,ratio=ratio,**kargs):
plotter.add_mesh(
pv.PolyData(pc*ratio),
render_points_as_spheres=True,
point_size=2,
**kargs)
return self
def add_line(self,p1,p2,ratio=ratio,**kargs):
plotter.add_mesh(
pv.Line(p1*ratio,p2*ratio),
**kargs,
line_width=1)
return self
def add_box(self,bound,trans,ratio=ratio):
l,w,h = map(lambda x:x*ratio,bound)
box = pv.Box(bounds=(-l/2,l/2,-w/2,w/2,-h/2,h/2))
box = box.translate(list(map(lambda x:x*ratio,trans)))
plotter.add_mesh(box, color='yellow')
return self
def show(self):
plotter.show()
return self
def close(self):
plotter.close()
return None

return foo().set_cam()
return visualizer().set_cam()
def crop(pc,ix=None,iy=None,iz=None):
# crop a subrectangle in a pointcloud
# usage: crop(pc, ix = (x_min,x_max), ...)
@@ -85,11 +58,11 @@ def crop(pc,ix=None,iy=None,iz=None):


lidar_post = np.pad(lidar_points,((0,0),(0,1)),constant_values=1) @ lidar_ex.T[:,:3]
-lidar_post = crop(lidar_post,ix=(0,8),iy=(-5,5))
-vis().add_pc(lidar_post).show()
+lidar_post = crop(lidar_post,ix=(0,10),iy=(-5,5))
+# vis(notebook=False).add_pc(lidar_post).show()

#%%
-def pick_4_img(img):
+def pick_n_img(img,n=4):
corners = [] # Reset the corners list
def click_event(event, x, y, flags, param):
if event == cv2.EVENT_LBUTTONDOWN:
Expand All @@ -101,23 +74,23 @@ def click_event(event, x, y, flags, param):
cv2.setMouseCallback('Image', click_event, img)

while True:
-if len(corners) == 4:
+if len(corners) == n:
break
if cv2.waitKey(1) & 0xFF == ord('q'):
return None

cv2.destroyAllWindows()

return corners
-cpoints = np.array(pick_4_img(img)).astype(float)
+cpoints = np.array(pick_n_img(img,N)).astype(float)
print(cpoints)

#%%
-def pick_4_pc(point_cloud):
+def pick_n_pc(point_cloud,n=4):
points = []
def cb(pt,*args):
points.append(pt)
-while len(points)!=4:
+while len(points)!=n:
points = []
cloud = pv.PolyData(point_cloud)
plotter = pv.Plotter(notebook=False)
Expand All @@ -128,7 +101,7 @@ def cb(pt,*args):
plotter.show()
return points

-lpoints = np.array(pick_4_pc(lidar_post))
+lpoints = np.array(pick_n_pc(lidar_post,N))
print(lpoints)
# %%
success, rvec, tvec = cv2.solvePnP(lpoints, cpoints, camera_in, None)
@@ -182,10 +155,9 @@ def depth_to_points(depth_img: np.ndarray, intrinsics: np.ndarray):
depth_img = cv2.imread(depth_path, cv2.IMREAD_UNCHANGED)
camera_points = depth_to_points(depth_img, camera_in)

v=vis()
v.add_pc(np.pad(lidar_points,((0,0),(0,1)),constant_values=1)@lidar_ex.T@T.T[:,:3],color='blue')
v.add_pc(camera_points,color='red')
v.show()

#%%
print(np.vstack([(lidar_ex.T@T.T[:,:3]).T,[0,0,0,1]]))
v2c = T
print('vehicle->camera:',v2c)
c2v = np.linalg.inv(v2c)
print('camera->vehicle:',c2v)
