Merged
60 changes: 60 additions & 0 deletions docs/carla_yolo_detection/README.md
@@ -0,0 +1,60 @@
# Autonomous Driving Perception and Control System Research (based on CARLA)

## 1. Project Overview
This project builds a basic autonomous-driving test environment through deep interaction between Python scripts and the CARLA simulator. It explores how deep-learning vision algorithms and virtual sensor data can be combined to detect dynamic objects in the simulated world, perceive the environment, and perform basic vehicle control.

## 2. Topic Background
* **Reference open-source project:** [kamilkolo22/AutonomousVehicle](https://github.com/kamilkolo22/AutonomousVehicle)
* **Refactoring approach:** some modules of the original project have poor Windows compatibility. This project extracts its core "visual recognition + sensor interaction" architecture and fully rewrites it in pure Python with PyTorch on Windows, for cross-platform usability and readable code.

## 3. Development Environment
* **Operating system:** Windows 10/11
* **Simulation platform:** HUTB CARLA_Mujoco_2.2.1
* **Programming language:** Python 3.8
* **Core frameworks:** PyTorch (with CUDA acceleration), OpenCV
* **Tooling:** Visual Studio Code / Anaconda

## 4. Module Layout and Entry Point
* All core code for this module lives under `src/carla_yolo_detection`.
* The module's main entry point is `main.py`.

# [Commit 2] carla_yolo_detection: Real-Time Perception and Background Traffic System

## 1. Module Features
This module implements the basic closed loop of autonomous-driving visual perception:
- **Real-time object detection**: integrates the YOLOv5s model to recognize vehicles and pedestrians in the CARLA environment in real time.
- **Background traffic generation**: uses the Traffic Manager to spawn 30 background vehicles at random spawn points, simulating dynamic road conditions.
- **Asynchronous inference architecture**: the image pipeline grabs only the latest frame via a callback, so deep-learning inference never freezes the display; the live FPS is shown on screen.
- **Safety monitoring**: a Collision Sensor is attached to the ego vehicle and prints a collision warning to the terminal in real time.
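The latest-frame strategy behind the asynchronous architecture above (the camera callback only stores the newest frame; a slower consumer takes it and stale frames are simply dropped) can be sketched independently of CARLA. Note one assumption: the sketch guards the shared slot with a `threading.Lock`, whereas `main.py` relies on a bare global variable.

```python
import threading

latest_frame = None          # shared slot holding only the newest frame
lock = threading.Lock()

def camera_callback(frame):
    """Producer: store the newest frame; never do heavy work here."""
    global latest_frame
    with lock:
        latest_frame = frame

def consume_latest():
    """Consumer: take the newest frame (if any) and clear the slot."""
    global latest_frame
    with lock:
        frame, latest_frame = latest_frame, None
    return frame

# Simulate a fast camera: 100 frames arrive before the consumer runs
for i in range(100):
    camera_callback(i)

# The slow consumer only ever sees the most recent frame
print(consume_latest())  # → 99
print(consume_latest())  # → None (slot already cleared)
```

Because the slot is overwritten rather than queued, inference latency never builds up a backlog of stale frames.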

## 2. Running the Module

### Step 1: Start the CARLA simulator
Run `CarlaUE4.exe` and wait for the map to finish loading.

### Step 2: Set up the Python environment
> **⚠️ Important**: this project targets HUTB CARLA_Mujoco_2.2.1, so the `carla` library bundled with the simulator must be installed manually. Do NOT simply `pip install carla`.

In an Anaconda environment (Python 3.8 recommended), run the following commands **in order**:

1. **Install the low-level CARLA API** (replace the path with the actual `.whl` location on your machine):

```bash
pip install D:\hutb\hutb_car_mujoco_2.2.1\PythonAPI\carla\dist\hutb-2.9.16-cp38-cp38-win_amd64.whl
```

2. Install the regular dependencies:

```bash
pip install -r src/carla_yolo_detection/requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
```


3. **(Optional) Enable GPU acceleration**:
If you have an NVIDIA GPU and want a smooth 30+ FPS, **run this additional command** to overwrite the install with the CUDA build of Torch:

```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```
### Step 3: Run the program
From the project root, run the main script:

```bash
python src/carla_yolo_detection/main.py
```
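A note on the FPS overlay: `main.py` displays an instantaneous value (1 divided by the last frame's processing time), which jitters from frame to frame. A smoothed estimator based on an exponential moving average (an illustrative addition, not part of the script) could look like this:

```python
import time

class FpsMeter:
    """Exponential moving average over per-frame durations."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha   # weight of the newest frame duration
        self.avg_dt = None   # smoothed frame duration in seconds
        self.last = None     # timestamp of the previous tick

    def tick(self):
        """Call once per processed frame."""
        now = time.perf_counter()
        if self.last is not None:
            dt = now - self.last
            self.avg_dt = dt if self.avg_dt is None else \
                self.alpha * dt + (1 - self.alpha) * self.avg_dt
        self.last = now

    @property
    def fps(self):
        return 0.0 if not self.avg_dt else 1.0 / self.avg_dt

meter = FpsMeter()
for _ in range(20):      # simulate a loop running at roughly 50 FPS
    meter.tick()
    time.sleep(0.02)
print(f"smoothed FPS: {meter.fps:.1f}")  # near 50, depending on timer resolution
```

Replacing the per-frame `1.0 / (time.time() - start_time)` with `meter.fps` would make the on-screen number much steadier.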
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -45,6 +45,7 @@ copyright: OpenHUTB 版权所有 © {year}

nav:
- 首页: 'index.md'
- 自动驾驶感知系统: 'carla_yolo_detection/README.md'


# - mdx_math 用于行内公式显示
126 changes: 126 additions & 0 deletions src/carla_yolo_detection/main.py
@@ -0,0 +1,126 @@
import carla
import random
import time
import numpy as np
import cv2
import torch
import warnings

warnings.filterwarnings("ignore")

print("Loading the YOLOv5 neural network model...")
# Use the GPU if one is available, otherwise fall back to the CPU
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True).to(device)
print(f"Model loaded. Compute device: {device.upper()}")

# Global slot that holds only the most recent frame
latest_image = None

def camera_callback(image):
    """The camera only stores the newest frame; no heavy work here."""
    global latest_image
    latest_image = image

def spawn_traffic(client, world, number_of_vehicles=30):
    bp_lib = world.get_blueprint_library()
    tm = client.get_trafficmanager(8000)
    tm.set_global_distance_to_leading_vehicle(2.5)
    tm.set_synchronous_mode(False)

    vehicle_bps = bp_lib.filter('vehicle.*')
    spawn_points = world.get_map().get_spawn_points()
    random.shuffle(spawn_points)

    temp_actors = []
    for i in range(min(number_of_vehicles, len(spawn_points))):
        bp = random.choice(vehicle_bps)
        vehicle = world.try_spawn_actor(bp, spawn_points[i])
        if vehicle:
            vehicle.set_autopilot(True, tm.get_port())
            temp_actors.append(vehicle)
    return temp_actors

def collision_handler(event):
    print(f"\n[💥 Collision warning] Collision detected with: {event.other_actor.type_id}")

def main():
    global latest_image
    actor_list = []
    try:
        client = carla.Client('localhost', 2000)
        client.set_timeout(10.0)
        world = client.get_world()
        bp_lib = world.get_blueprint_library()

        # Spawn the ego vehicle
        vehicle_bp = bp_lib.filter('vehicle.tesla.model3')[0]
        spawn_point = random.choice(world.get_map().get_spawn_points())
        vehicle = world.spawn_actor(vehicle_bp, spawn_point)
        actor_list.append(vehicle)
        vehicle.set_autopilot(True)

        # Spawn background traffic
        traffic_actors = spawn_traffic(client, world, 30)
        actor_list.extend(traffic_actors)

        # Attach the camera (640x480 suits YOLOv5s well and is much faster)
        cam_bp = bp_lib.find('sensor.camera.rgb')
        cam_bp.set_attribute('image_size_x', '640')
        cam_bp.set_attribute('image_size_y', '480')
        cam_bp.set_attribute('fov', '90')
        camera = world.spawn_actor(cam_bp, carla.Transform(carla.Location(x=1.5, z=2.4)), attach_to=vehicle)
        actor_list.append(camera)

        # Register the lightweight callback
        camera.listen(camera_callback)

        # Attach the collision sensor
        col_bp = bp_lib.find('sensor.other.collision')
        collision_sensor = world.spawn_actor(col_bp, carla.Transform(), attach_to=vehicle)
        actor_list.append(collision_sensor)
        collision_sensor.listen(collision_handler)

        print("\n✅ System started. Press Ctrl+C to exit...")

        # Main loop: dedicated to deep-learning inference
        while True:
            if latest_image is not None:
                # Record the start time for FPS calculation
                start_time = time.time()

                # Grab the newest frame and clear the slot for the next one
                img_data = latest_image
                latest_image = None

                # Convert the raw BGRA buffer to an RGB image
                i = np.frombuffer(img_data.raw_data, dtype=np.uint8)
                i2 = i.reshape((img_data.height, img_data.width, 4))
                img_bgr = i2[:, :, :3]
                img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)

                # YOLO inference
                results = model(img_rgb)
                img_with_boxes = results.render()[0]
                img_display = cv2.cvtColor(img_with_boxes, cv2.COLOR_RGB2BGR)

                # Compute and overlay the FPS
                fps = 1.0 / (time.time() - start_time)
                cv2.putText(img_display, f"FPS: {fps:.1f}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)

                cv2.imshow("Optimized Detection", img_display)
                cv2.waitKey(1)
            else:
                # No new frame yet: sleep briefly to avoid pegging the CPU
                time.sleep(0.001)

    except KeyboardInterrupt:
        print("\nShutting down...")
    finally:
        for actor in actor_list:
            if actor is not None:
                actor.destroy()
        cv2.destroyAllWindows()

if __name__ == '__main__':
    main()
12 changes: 12 additions & 0 deletions src/carla_yolo_detection/requirements.txt
@@ -0,0 +1,12 @@
torch>=1.8.0
torchvision
torchaudio
numpy>=1.18.5
opencv-python>=4.1.1
ultralytics>=8.0.0
seaborn>=0.11.0
pandas>=1.1.4
tqdm
matplotlib
scipy
gitpython>=3.1.30