Merged
Binary file modified GEMstack/.DS_Store
Binary file not shown.
2 changes: 1 addition & 1 deletion GEMstack/knowledge/calibration/gem_e4.yaml
@@ -1,6 +1,6 @@
calibration_date: "2025-02-25" # Date of calibration YYYY-MM-DD
reference: rear_axle_center # rear axle center
rear_axle_height: 0.33 # height of rear axle center above flat ground
rear_axle_height: 0.2794 # height of rear axle center above flat ground
gnss_location: [1.10,0,1.62] # meters, taken from https://github.com/hangcui1201/POLARIS_GEM_e2_Real/blob/main/vehicle_drivers/gem_gnss_control/scripts/gem_gnss_tracker_stanley_rtk.py. Note conflict with pure pursuit location?
gnss_yaw: 0.0 # radians
top_lidar: !include "gem_e4_ouster.yaml"
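The `!include` tag in the calibration file above is not standard YAML; PyYAML needs a custom constructor to resolve it. A minimal sketch of such a loader — the class name `IncludeLoader` and the convention that included paths are relative to the parent file are assumptions, not necessarily what GEMstack actually uses:

```python
import os
import yaml

class IncludeLoader(yaml.SafeLoader):
    """SafeLoader that resolves `!include` tags relative to the parent file."""
    def __init__(self, stream):
        # File streams carry a .name attribute; fall back to the cwd otherwise
        self._root = os.path.dirname(getattr(stream, "name", "."))
        super().__init__(stream)

def _include(loader, node):
    # Resolve the included file relative to the parent and parse it recursively
    path = os.path.join(loader._root, loader.construct_scalar(node))
    with open(path) as f:
        return yaml.load(f, IncludeLoader)

IncludeLoader.add_constructor("!include", _include)

def load_calibration(path):
    with open(path) as f:
        return yaml.load(f, IncludeLoader)
```

With this loader, `load_calibration("gem_e4.yaml")` returns a dict in which `top_lidar` is the parsed contents of `gem_e4_ouster.yaml` rather than a raw tag.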
36 changes: 30 additions & 6 deletions GEMstack/offboard/calibration/README.md
@@ -1,6 +1,26 @@
# Data Capture

### Data Capture Script (`capture_ouster_oak.py`)

Set up on vehicle:

1. Run `roscore` in a terminal
2. Run `catkin_make` in the GEMstack workspace
3. Run `source /demo_ws/devel/setup.bash`
4. Run `roslaunch basic_launch sensor_init.launch` to start all sensors

Script usage:

python3 capture_ouster_oak.py

1. To specify the directory where data is saved, use `--folder "path to save location"` (default: `data`)
2. To specify the capture frequency, use `--frequency FREQUENCY_HZ` (default: 2 Hz)
3. To specify the index the first capture is saved under, use `--start_index DESIRED_INDEX` (default: 1)


# GEMstack Offboard Calibration

This repository contains tools for offline calibration of LiDAR and camera sensors to the vehicle coordinate system on the GEM E4 platform. The calibration pipeline consists of three stages:
This section explains tools for offline calibration of LiDAR and camera sensors to the vehicle coordinate system on the GEM E4 platform. The calibration pipeline consists of three stages:

1. **LiDAR-to-Vehicle**
2. **Camera-to-Vehicle**
@@ -26,22 +46,26 @@ This repository contains tools for offline calibration of LiDAR and camera senso

**Usage**:

python3 lidar_to_vehicle.py # Edit LiDAR data paths in script
The script assumes data files are named `colorX.png`, `lidarX.npz`, and `depthX.tif`, where `X` is an index number. Choose `X` according to which data sample you want to use for calibration.

python3 lidar_to_vehicle.py --data_path "path to data folder" --index INDEX_NUM

Use the `--vis` flag to show visualizations throughout the computation process.
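The core of the LiDAR-to-vehicle stage is recovering the sensor's height and tilt from a ground-plane fit. A numpy-only sketch of that step, assuming the ground points have already been cropped out of the cloud — the helper names are hypothetical, not the script's actual API:

```python
import numpy as np

def fit_ground_plane(points):
    """Fit a plane to N x 3 ground points via SVD; return a unit normal n and
    offset d such that n . p + d = 0 for points p on the plane."""
    centroid = points.mean(axis=0)
    # The singular vector with the smallest singular value of the centered
    # cloud is the direction of least variance, i.e. the plane normal
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    if normal[2] < 0:          # orient the normal upward
        normal = -normal
    d = -normal.dot(centroid)
    return normal, d

def ground_height_and_tilt(normal, d):
    """Sensor height above the plane and roll/pitch angles (radians)."""
    height = abs(d) / np.linalg.norm(normal)   # distance from sensor origin to plane
    roll = np.arctan2(normal[1], normal[2])    # rotation about x
    pitch = -np.arctan2(normal[0], np.hypot(normal[1], normal[2]))  # rotation about y
    return height, roll, pitch
```

For a perfectly level sensor, the recovered roll and pitch are zero and the height equals the plane's distance below the sensor origin.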


### 2. CAMERA-to-Vehicle Calibration (`camera_to_vehicle_manual.py`)
**Method**:
1. Capture camera intrinsics using camera_info.py (ROS node)
2. Manually select 4 matching points in RGB image and LiDAR cloud
2. Manually select 4 matching points in RGB image and LiDAR cloud (can adjust variable to select more pairs)
3. Solve PnP problem to compute camera extrinsics

**Usage**:
1. Get camera intrinsics:
rosrun offboard/calibration/camera_info.py # Prints intrinsic matrix
2. Update camera_in in script with intrinsics
3. Update data paths in script
3. The script assumes data files are named `colorX.png`, `lidarX.npz`, and `depthX.tif`, where `X` is an index number; choose `X` according to which data sample you want to use. The script also reads the lidar-to-vehicle matrix from `gem_e4_ouster.yaml`, so make sure that file is up to date.
4. Run calibration:
python3 camera_to_vehicle_manual.py
python3 camera_to_vehicle_manual.py --data_path "path to data folder" --index INDEX_NUM --config "path to gem_e4_ouster.yaml"


### 3. LIDAR-to-CAMERA Calibration (`lidar_to_camera.py`)
@@ -58,7 +82,7 @@ python3 lidar_to_camera.py # Ensure T_lidar_vehicle and T_camera_vehicle matri

**3D Alignment Check**:
1. Use vis() function in scripts to view calibrated LiDAR/camera clouds
2. Toggle VIS = True in lidar_to_vehicle.py for ground plane/object visualization
2. Use --vis flag when running lidar_to_vehicle.py for ground plane/object visualization
3. Use `test_transforms.py` to overlay the LiDAR point cloud on the PNG image; this helps verify the accuracy of the lidar-to-camera transform.
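The overlay check boils down to transforming LiDAR points into the camera frame with the extrinsics and projecting them with the intrinsics. A numpy-only sketch of that projection — the function name and matrix values are illustrative, not the actual `test_transforms.py` code:

```python
import numpy as np

def project_lidar_to_image(points, T_camera_lidar, K, image_size):
    """Project N x 3 LiDAR points into pixel coordinates.

    Returns the pixel coordinates (M x 2) of points that land inside the image
    and in front of the camera, plus their depths in the camera frame."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_camera_lidar @ homog.T).T[:, :3]
    in_front = cam[:, 2] > 0.1            # discard points behind the camera
    cam = cam[in_front]
    pix = (K @ cam.T).T
    pix = pix[:, :2] / pix[:, 2:3]        # perspective divide
    w, h = image_size
    inside = ((pix[:, 0] >= 0) & (pix[:, 0] < w) &
              (pix[:, 1] >= 0) & (pix[:, 1] < h))
    return pix[inside], cam[inside, 2]
```

Coloring the returned pixels by depth and drawing them over the RGB image makes misalignment (e.g. LiDAR edges offset from image edges) easy to spot.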

**Projection Validation**:
38 changes: 27 additions & 11 deletions GEMstack/offboard/calibration/camera_to_vehicle_manual.py
@@ -4,26 +4,42 @@
import matplotlib.pyplot as plt
import numpy as np
from visualizer import visualizer
import argparse
import yaml

N = 8 #how many point pairs you want to select
parser = argparse.ArgumentParser(description='Select corresponding lidar, color, depth files based on index')
parser.add_argument('--data_path', type=str, required=True, help='Path to the dataset')
parser.add_argument('--index', type=int, required=True, help='Index for selecting the files')
parser.add_argument('--config', type=str, required=True, help='Path to YAML configuration file')
args = parser.parse_args()

# Construct file paths based on the provided index
lidar_path = f'{args.data_path}/lidar{args.index}.npz'
rgb_path = f'{args.data_path}/color{args.index}.png'
depth_path = f'{args.data_path}/depth{args.index}.tif'

# Update Depending on Where Data Stored
rgb_path = './data/color32.png'
depth_path = './data/depth32.tif'
lidar_path = './data/lidar32.npz'
N = 8 #how many point pairs you want to select

img = cv2.imread(rgb_path, cv2.IMREAD_UNCHANGED)

lidar_points = np.load(lidar_path)['arr_0']
lidar_points = lidar_points[~np.all(lidar_points== 0, axis=1)] # remove (0,0,0)'s

rx,ry,rz = 0.006898647163954201, 0.023800082245145304, -0.025318355743942974
tx,ty,tz = 1.1, 0.037735827433173136, 1.953202227766785
rot = R.from_euler('xyz',[rx,ry,rz]).as_matrix()
lidar_ex = np.hstack([rot,[[tx],[ty],[tz]]])
lidar_ex = np.vstack([lidar_ex,[0,0,0,1]])

camera_in = np.array([

# Load transformation parameters from YAML file
with open(args.config, 'r') as yaml_file:
config = yaml.safe_load(yaml_file)

tx, ty, tz = config['position']
rot = np.array(config['rotation'])

# Construct transformation matrix
lidar_ex = np.hstack([rot, np.array([[tx], [ty], [tz]])])
lidar_ex = np.vstack([lidar_ex, [0, 0, 0, 1]])

camera_in = np.array([ # Update intrinsics if necessary
[684.83331299, 0. , 573.37109375],
[ 0. , 684.60968018, 363.70092773],
[ 0. , 0. , 1. ]
21 changes: 11 additions & 10 deletions GEMstack/offboard/calibration/capture_ouster_oak.py
Expand Up @@ -4,6 +4,8 @@
import sensor_msgs.point_cloud2 as pc2
import ctypes
import struct
import argparse


# OpenCV and cv2 bridge
import cv2
@@ -70,20 +72,20 @@ def save_scan(lidar_fn,color_fn,depth_fn):
dimage = dimage.astype(np.uint16)
cv2.imwrite(depth_fn,dimage)

def main(folder='data',start_index=1):
def main(folder='data',start_index=1, frequency=2):
rospy.init_node("capture_ouster_oak",disable_signals=True)
lidar_sub = rospy.Subscriber("/ouster/points", PointCloud2, lidar_callback)
camera_sub = rospy.Subscriber("/oak/rgb/image_raw", Image, camera_callback)
depth_sub = rospy.Subscriber("/oak/stereo/image_raw", Image, depth_callback)
index = 0
index = start_index
print(" Storing lidar point clouds as npz")
print(" Storing color images as png")
print(" Storing depth images as tif")
print(" Ctrl+C to quit")
while True:
if camera_image and depth_image:
cv2.imshow("result",bridge.imgmsg_to_cv2(camera_image))
time.sleep(.5)
time.sleep(1.0/frequency)
files = [
os.path.join(folder,'lidar{}.npz'.format(index)),
os.path.join(folder,'color{}.png'.format(index)),
@@ -93,10 +95,9 @@ def main(folder='data',start_index=1):

if __name__ == '__main__':
import sys
folder = 'data'
start_index = 1
if len(sys.argv) >= 2:
folder = sys.argv[1]
if len(sys.argv) >= 3:
start_index = int(sys.argv[2])
main(folder,start_index)
parser = argparse.ArgumentParser(description='Capture LiDAR and camera data.')
parser.add_argument('--folder', type=str, default='data', help='Directory to store data')
parser.add_argument('--start_index', type=int, default=1, help='Starting index for saved files')
parser.add_argument('--frequency', type=float, default=2.0, help='Capture frequency in Hz')
args = parser.parse_args()
main(args.folder, args.start_index, args.frequency)
14 changes: 11 additions & 3 deletions GEMstack/offboard/calibration/lidar_to_vehicle.py
@@ -6,9 +6,15 @@
import pyvista as pv
import matplotlib.pyplot as plt
from visualizer import visualizer
import argparse

VIS = False # True to show visuals
VIS = True # True to show visuals
parser = argparse.ArgumentParser(description='Process LiDAR data with calibration.')
parser.add_argument('--data_path', type=str, required=True, help='Path to the dataset')
parser.add_argument('--index', type=int, required=True, help='Index for selecting the files')
parser.add_argument('--vis', action='store_true', help='Enable visualization')
args = parser.parse_args()

VIS = args.vis

#%% things to extract
tx,ty,tz,rx,ry,rz = [None] * 6
@@ -24,6 +30,9 @@ def load_scene(path):
sc = np.load(path)['arr_0']
sc = sc[~np.all(sc == 0, axis=1)] # remove (0,0,0)'s
return sc

sc1 = load_scene(f'{args.data_path}/lidar{args.index}.npz')

def crop(pc,ix=None,iy=None,iz=None):
# crop a subrectangle in a pointcloud
# usage: crop(pc, ix = (x_min,x_max), ...)
@@ -153,7 +162,6 @@ def pc_diff(pc0,pc1,tol=0.1):

else: #False to use only cropping
# Update depending on where data is stored
sc1 = load_scene('./data/lidar1.npz')

objects = sc1 @ rot.T + [0,0,tz]
