Skip to content

Commit 88bcbdc

Browse files
authored
Merge pull request #6 from AccelerationConsortium/apriltag-implementation
Apriltag-implementation
2 parents cb689eb + e11e3ec commit 88bcbdc

39 files changed

Lines changed: 6515 additions & 685 deletions

.github/copilot-instructions.md

Lines changed: 17 additions & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -1,11 +1,26 @@
11
## Development Practices
22

33
- Start with minimal, lean implementations focused on proof-of-concept
4+
- Avoid creating new files until asked
45
- Avoid implementing things from scratch
56
- Avoid defensive error handling for hypothetical failures
67
- Use print statements and logging sparingly, unless asked
78
- Avoid light wrappers and custom classes, unless asked
8-
- Avoid `if __name__ == "__main__"` patterns in package code
9+
- Avoid `if __name__ == "__main__"` patterns in package code, unless asked
10+
For example, rather than using:
11+
```python
12+
from math import sqrt
13+
def main():
14+
sqrt(2)
15+
16+
if __name__ == "__main__":
17+
main()
18+
```
19+
Leave it as a top-level script:
20+
```python
21+
from math import sqrt
22+
sqrt(2)
23+
```
924
- Skip unit tests unless explicitly requested
1025
- Follow patterns in CONTRIBUTING.md when present
1126
- Prefer writing Python if no language specified
@@ -33,6 +48,7 @@
3348

3449
## Change Logging
3550

51+
- Create CHANGELOG.md if it doesn't exist
3652
- Each time you generate code, note the changes in CHANGELOG.md
3753
- Follow semantic versioning guidelines
3854
- Include date and description of changes

README.md

Lines changed: 41 additions & 4 deletions
Original file line numberDiff line numberDiff line change
@@ -119,7 +119,7 @@ camera:
119119

120120
# AprilTag Configuration
121121
apriltag:
122-
family: "tag36h11"
122+
family: "tagStandard41h12"
123123
tag_size: 0.023 # 23mm tags
124124
```
125125
@@ -181,7 +181,7 @@ from ur_toolkit.apriltag_detection import AprilTagDetector
181181
182182
# Initialize detector
183183
detector = AprilTagDetector(
184-
tag_family='tag36h11',
184+
tag_family='tagStandard41h12',
185185
tag_size=0.023, # 23mm tags
186186
camera_calibration_file='camera_calibration/camera_calibration.yaml'
187187
)
@@ -228,6 +228,43 @@ python src/ur_toolkit/camera_calibration/capture_calibration_photos.py
228228
python src/ur_toolkit/camera_calibration/calculate_camera_intrinsics.py
229229
```
230230

231+
### 5. Hand-Eye Calibration
232+
233+
For precise camera-to-robot coordinate transformation:
234+
235+
```bash
236+
# Run automated hand-eye calibration
237+
python scripts/run_hand_eye_calibration.py --robot-ip 192.168.1.100 --tag-ids 0 --num-poses 15
238+
```
239+
240+
**What it does:**
241+
- Automatically moves robot through 15 diverse poses
242+
- Captures AprilTag detections at each pose
243+
- Solves the AX = XB calibration equation using OpenCV's robust algorithm
244+
- Saves calibration to `src/ur_toolkit/hand_eye_calibration/hand_eye_calibration.json`
245+
246+
**Parameters:**
247+
- `--robot-ip`: Your robot's IP address
248+
- `--tag-ids`: AprilTag IDs to use (default: [0])
249+
- `--num-poses`: Number of calibration poses (default: 15)
250+
- `--manual`: Use manual freedrive mode instead of automatic movement
251+
252+
**Safety Notes:**
253+
- ⚠️ Ensure workspace is clear before starting
254+
- Keep emergency stop accessible
255+
- Verify robot joint limits and collision avoidance
256+
- AprilTag must be visible from all poses
257+
258+
**Quality Assessment:**
259+
- **Excellent**: Translation error < 5mm, Rotation error < 2°
260+
- **Good**: Translation error < 10mm, Rotation error < 5°
261+
- **Poor**: Translation error > 10mm, Rotation error > 5° (recalibrate)
262+
263+
**Troubleshooting:**
264+
- **"No AprilTag detected"**: Check lighting, tag visibility, verify tag IDs
265+
- **"Pose not reachable"**: Start from more central robot position
266+
- **"Poor calibration quality"**: Increase `--num-poses`, ensure good lighting
267+
231268
## 🏗️ Development
232269

233270
The project uses a `src/` layout for better packaging and testing:
@@ -310,7 +347,7 @@ from apriltag_detection import AprilTagDetector
310347
robot = URRobotInterface('192.168.0.10')
311348
camera = PiCam(PiCamConfig.from_yaml('camera_client_config.yaml'))
312349
detector = AprilTagDetector(
313-
tag_family='tag36h11',
350+
tag_family='tagStandard41h12',
314351
tag_size=0.023,
315352
camera_calibration_file='camera_calibration/camera_calibration.yaml'
316353
)
@@ -335,7 +372,7 @@ for detection in detections:
335372
- **Pi Camera + Laptop**: Same subnet (configure in `camera_client_config.yaml`)
336373

337374
### AprilTag Settings
338-
- Default: tag36h11 family, 23mm size
375+
- Default: tagStandard41h12 family (recommended), 23mm size
339376
- Customize in detection code for your specific tags
340377
- Ensure tags are printed at exact scale for accurate pose estimation
341378
- **Robot + Pi Camera**: Different subnets OK

analyze_calibration_accuracy.py

Lines changed: 207 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,207 @@
1+
import json
2+
import numpy as np
3+
from scipy.spatial.transform import Rotation as R
4+
from pathlib import Path
5+
6+
def analyze_hand_eye_calibration_quality():
7+
"""
8+
Analyze hand-eye calibration quality using real geometric metrics
9+
instead of OpenCV's misleading residual artifacts.
10+
"""
11+
12+
# Load calibration data
13+
calib_file = Path('src/ur_toolkit/hand_eye_calibration/hand_eye_calibration.json')
14+
with open(calib_file) as f:
15+
data = json.load(f)
16+
17+
print("🔬 HAND-EYE CALIBRATION QUALITY ANALYSIS")
18+
print("=" * 60)
19+
print()
20+
21+
# Basic info
22+
info = data['calibration_info']
23+
print(f"📅 Calibration: {info['timestamp']}")
24+
print(f"📊 Poses collected: {info['poses_collected']}")
25+
print(f"✅ Successful: {info['calibration_successful']}")
26+
print()
27+
28+
# Extract calibration data
29+
T_ce = np.array(data['hand_eye_transform']['matrix'])
30+
robot_poses = [np.array(pose) for pose in data['raw_pose_data']['robot_poses']]
31+
tag_poses = [np.array(pose) for pose in data['raw_pose_data']['apriltag_poses']]
32+
33+
print("🎯 CALIBRATION RESULT ANALYSIS:")
34+
print(f" Camera position: [{T_ce[0,3]*1000:.1f}, {T_ce[1,3]*1000:.1f}, {T_ce[2,3]*1000:.1f}] mm")
35+
print(f" Camera distance from TCP: {np.linalg.norm(T_ce[:3,3])*1000:.1f} mm")
36+
37+
# Analyze rotation
38+
R_ce = T_ce[:3,:3]
39+
euler = R.from_matrix(R_ce).as_euler('xyz', degrees=True)
40+
print(f" Camera orientation: [{euler[0]:.1f}°, {euler[1]:.1f}°, {euler[2]:.1f}°]")
41+
print()
42+
43+
# REAL QUALITY METRICS
44+
print("� REAL QUALITY METRICS (NOT OpenCV artifacts):")
45+
print()
46+
47+
# 1. Pose diversity analysis
48+
print("1️⃣ POSE DIVERSITY ANALYSIS:")
49+
50+
# Translation diversity
51+
robot_translations = np.array([pose[:3,3] for pose in robot_poses])
52+
translation_range = np.ptp(robot_translations, axis=0) * 1000 # Convert to mm
53+
translation_volume = np.prod(translation_range)
54+
55+
print(f" Translation range: [{translation_range[0]:.1f}, {translation_range[1]:.1f}, {translation_range[2]:.1f}] mm")
56+
print(f" Workspace volume: {translation_volume/1000:.0f} cm³")
57+
58+
# Rotation diversity
59+
robot_rotations = [R.from_matrix(pose[:3,:3]) for pose in robot_poses]
60+
angles = []
61+
for i in range(len(robot_rotations)):
62+
for j in range(i+1, len(robot_rotations)):
63+
rel_rot = robot_rotations[i].inv() * robot_rotations[j]
64+
angle = rel_rot.magnitude()
65+
angles.append(np.degrees(angle))
66+
67+
max_rotation_diff = max(angles)
68+
avg_rotation_diff = np.mean(angles)
69+
70+
print(f" Max rotation difference: {max_rotation_diff:.1f}°")
71+
print(f" Average rotation difference: {avg_rotation_diff:.1f}°")
72+
73+
# Quality assessment for diversity
74+
trans_quality = "EXCELLENT" if translation_volume > 50000 else "GOOD" if translation_volume > 10000 else "POOR"
75+
rot_quality = "EXCELLENT" if max_rotation_diff > 45 else "GOOD" if max_rotation_diff > 20 else "POOR"
76+
77+
print(f" Translation diversity: {trans_quality}")
78+
print(f" Rotation diversity: {rot_quality}")
79+
print()
80+
81+
# 2. Consistency analysis
82+
print("2️⃣ CALIBRATION CONSISTENCY ANALYSIS:")
83+
84+
# Check consistency by validating the hand-eye equation: T_base_to_tag = T_base_to_ee * T_ee_to_cam * T_cam_to_tag
85+
consistency_errors = []
86+
87+
for i in range(len(robot_poses)):
88+
T_base_to_ee = robot_poses[i]
89+
T_cam_to_tag = tag_poses[i]
90+
91+
# Predicted tag pose in base frame using hand-eye calibration
92+
T_predicted_base_to_tag = T_base_to_ee @ T_ce @ T_cam_to_tag
93+
94+
# For comparison, we need a reference. Let's use the first pose as reference.
95+
if i == 0:
96+
T_reference_base_to_tag = T_predicted_base_to_tag
97+
continue
98+
99+
# The tag should appear at the same location in base frame for all poses
100+
T_error = T_reference_base_to_tag @ np.linalg.inv(T_predicted_base_to_tag)
101+
102+
# Extract translation and rotation errors
103+
trans_error = np.linalg.norm(T_error[:3,3]) * 1000 # mm
104+
rot_error = R.from_matrix(T_error[:3,:3]).magnitude() * 180/np.pi # degrees
105+
106+
consistency_errors.append({
107+
'pose': i+1,
108+
'translation_error_mm': trans_error,
109+
'rotation_error_deg': rot_error
110+
})
111+
112+
if consistency_errors:
113+
avg_trans_error = np.mean([e['translation_error_mm'] for e in consistency_errors])
114+
avg_rot_error = np.mean([e['rotation_error_deg'] for e in consistency_errors])
115+
max_trans_error = max([e['translation_error_mm'] for e in consistency_errors])
116+
max_rot_error = max([e['rotation_error_deg'] for e in consistency_errors])
117+
118+
print(f" Average consistency error: {avg_trans_error:.2f}mm, {avg_rot_error:.2f}°")
119+
print(f" Maximum consistency error: {max_trans_error:.2f}mm, {max_rot_error:.2f}°")
120+
121+
# Quality assessment for consistency
122+
consistency_quality = "EXCELLENT" if avg_trans_error < 5 and avg_rot_error < 2 else \
123+
"GOOD" if avg_trans_error < 15 and avg_rot_error < 5 else \
124+
"POOR"
125+
126+
print(f" Consistency quality: {consistency_quality}")
127+
128+
# Show worst poses
129+
worst_trans = max(consistency_errors, key=lambda x: x['translation_error_mm'])
130+
worst_rot = max(consistency_errors, key=lambda x: x['rotation_error_deg'])
131+
print(f" Worst translation error: Pose {worst_trans['pose']} ({worst_trans['translation_error_mm']:.2f}mm)")
132+
print(f" Worst rotation error: Pose {worst_rot['pose']} ({worst_rot['rotation_error_deg']:.2f}°)")
133+
134+
print()
135+
136+
# 3. Physical reasonableness
137+
print("3️⃣ PHYSICAL REASONABLENESS CHECK:")
138+
139+
camera_distance = np.linalg.norm(T_ce[:3,3]) * 1000
140+
camera_pos = T_ce[:3,3] * 1000
141+
142+
# Check if camera position makes physical sense
143+
distance_reasonable = 50 < camera_distance < 300 # 5cm to 30cm from TCP
144+
145+
# Check if camera is pointing roughly forward (positive Z in camera frame should align with some robot axis)
146+
camera_z_direction = T_ce[:3,2] # Camera Z axis (forward direction)
147+
camera_pointing_forward = camera_z_direction[0] > 0.5 # Roughly pointing in robot X direction
148+
149+
print(f" Camera distance from TCP: {camera_distance:.1f}mm {'✅' if distance_reasonable else '⚠️'}")
150+
print(f" Camera pointing direction: {'Forward ✅' if camera_pointing_forward else 'Other orientation ⚠️'}")
151+
print(f" Physical setup: {'Reasonable ✅' if distance_reasonable else 'Check mounting ⚠️'}")
152+
print()
153+
154+
# 4. Overall quality assessment
155+
print("🏆 OVERALL QUALITY ASSESSMENT:")
156+
157+
quality_scores = []
158+
if 'trans_quality' in locals():
159+
quality_scores.append(trans_quality)
160+
if 'rot_quality' in locals():
161+
quality_scores.append(rot_quality)
162+
if 'consistency_quality' in locals():
163+
quality_scores.append(consistency_quality)
164+
165+
excellent_count = quality_scores.count('EXCELLENT')
166+
good_count = quality_scores.count('GOOD')
167+
poor_count = quality_scores.count('POOR')
168+
169+
if excellent_count >= 2:
170+
overall_quality = "EXCELLENT"
171+
overall_emoji = "🥇"
172+
elif good_count >= 2 and poor_count == 0:
173+
overall_quality = "GOOD"
174+
overall_emoji = "🥈"
175+
elif poor_count <= 1:
176+
overall_quality = "ACCEPTABLE"
177+
overall_emoji = "🥉"
178+
else:
179+
overall_quality = "POOR"
180+
overall_emoji = "❌"
181+
182+
print(f" {overall_emoji} Overall calibration quality: {overall_quality}")
183+
print()
184+
185+
# 5. Recommendations
186+
print("💡 RECOMMENDATIONS:")
187+
188+
if overall_quality == "EXCELLENT":
189+
print(" ✅ Calibration is excellent - ready for precise visual servoing")
190+
print(" ✅ Expected accuracy: 1-3mm translation, 0.5-1.5° rotation")
191+
elif overall_quality == "GOOD":
192+
print(" ✅ Calibration is good - suitable for most visual servoing tasks")
193+
print(" ✅ Expected accuracy: 2-5mm translation, 1-3° rotation")
194+
elif overall_quality == "ACCEPTABLE":
195+
print(" ⚠️ Calibration is acceptable but could be improved")
196+
print(" 📝 Consider recalibrating with more diverse poses")
197+
print(" 📝 Expected accuracy: 3-8mm translation, 2-5° rotation")
198+
else:
199+
print(" ❌ Calibration quality is poor - recalibration recommended")
200+
print(" 📝 Increase pose diversity, especially rotations")
201+
print(" 📝 Check physical setup and AprilTag mounting")
202+
203+
print()
204+
print("📋 NOTE: These are REAL quality metrics, not OpenCV's misleading 500+mm 'residuals'")
205+
206+
if __name__ == "__main__":
207+
analyze_hand_eye_calibration_quality()

compare_calibrations.py

Lines changed: 73 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,73 @@
1+
import json
2+
import numpy as np
3+
4+
# Load current calibration
5+
with open('src/ur_toolkit/hand_eye_calibration/hand_eye_calibration.json') as f:
6+
current = json.load(f)
7+
8+
print("=== HAND-EYE CALIBRATION COMPARISON ===\n")
9+
10+
# Previous calibration data (from conversation history)
11+
prev_poses = 8
12+
prev_translation = [-113.73, 11.49, 28.88] # mm
13+
prev_trans_error = 753.38 # mm
14+
prev_rot_error = 119.06 # degrees
15+
16+
# Current calibration data
17+
curr_poses = current['calibration_info']['poses_collected']
18+
curr_trans = current['hand_eye_transform']['translation_mm']
19+
curr_translation = [curr_trans['x'], curr_trans['y'], curr_trans['z']]
20+
curr_accuracy = current['calibration_accuracy']['residuals']
21+
curr_trans_error = curr_accuracy['average_translation_error_mm']
22+
curr_rot_error = curr_accuracy['average_rotation_error_deg']
23+
24+
print("📅 PREVIOUS CALIBRATION (Sep 26):")
25+
print(f" Poses collected: {prev_poses}")
26+
print(f" Translation: [{prev_translation[0]:.1f}, {prev_translation[1]:.1f}, {prev_translation[2]:.1f}] mm")
27+
print(f" Average errors: {prev_trans_error:.1f}mm translation, {prev_rot_error:.1f}° rotation")
28+
print()
29+
30+
print("📅 CURRENT CALIBRATION (Sep 29):")
31+
print(f" Poses collected: {curr_poses}")
32+
print(f" Translation: [{curr_translation[0]:.1f}, {curr_translation[1]:.1f}, {curr_translation[2]:.1f}] mm")
33+
print(f" Average errors: {curr_trans_error:.1f}mm translation, {curr_rot_error:.1f}° rotation")
34+
print()
35+
36+
# Calculate differences
37+
translation_shift = np.array(curr_translation) - np.array(prev_translation)
38+
total_shift = np.linalg.norm(translation_shift)
39+
error_change_trans = curr_trans_error - prev_trans_error
40+
error_change_rot = curr_rot_error - prev_rot_error
41+
42+
print("🔍 COMPARISON ANALYSIS:")
43+
print(f" Translation shift: [{translation_shift[0]:.1f}, {translation_shift[1]:.1f}, {translation_shift[2]:.1f}] mm")
44+
print(f" Total position shift: {total_shift:.1f} mm")
45+
print(f" Error change: {error_change_trans:+.1f}mm translation, {error_change_rot:+.1f}° rotation")
46+
print()
47+
48+
print("📊 QUALITY ASSESSMENT:")
49+
print(f" Data quantity: {'✅ Better' if curr_poses > prev_poses else '❌ Same/Worse'} ({curr_poses} vs {prev_poses} poses)")
50+
print(f" Position stability: {'✅ Good' if total_shift < 50 else '⚠️ Significant shift'} ({total_shift:.1f}mm shift)")
51+
print(f" Error consistency: {'✅ Consistent' if abs(error_change_trans) < 100 else '⚠️ Variable'} ({error_change_trans:+.1f}mm change)")
52+
53+
# Interpretation
54+
print("\n🎯 INTERPRETATION:")
55+
if total_shift < 30:
56+
print(" Position difference is small - calibrations are reasonably consistent")
57+
elif total_shift < 60:
58+
print(" Moderate position difference - some variation but acceptable")
59+
else:
60+
print(" Large position difference - indicates calibration variability")
61+
62+
if abs(error_change_trans) < 50:
63+
print(" Error levels are similar - both calibrations have comparable internal consistency")
64+
else:
65+
print(" Different error levels - may indicate different pose diversity or quality")
66+
67+
print(f"\n💡 RECOMMENDATION:")
68+
if curr_poses > prev_poses and total_shift < 50:
69+
print(" Current calibration is better: More poses + consistent position")
70+
elif total_shift > 60:
71+
print(" Consider recalibrating with more rotational diversity for stability")
72+
else:
73+
print(" Both calibrations are reasonable - use current for most recent data")

0 commit comments

Comments
 (0)