Simulates NVIDIA DRIVE AGX Thor parallelism on a GTX 1050 Ti using a MATLAB/Simulink MPC for longitudinal control, generating CUDA code with GPU Coder to compare GPU vs CPU performance.


JDazogbo/NVIDIA-Drive-Emulation


NVIDIA Drive Emulation

This project emulates computation and control algorithms similar to those deployed on the DRIVE AGX Thor, NVIDIA's flagship platform for autonomous driving. By using a consumer-grade NVIDIA GTX 1050 Ti GPU, it demonstrates the feasibility of running advanced autonomous driving algorithms through Processor-in-the-Loop (PiL) simulation.

Hierarchical Architecture for Planning and Control

Figure 1: Processor-in-the-Loop emulation of the GPU-deployed Model Predictive Controller.

Technical Details

  • Control Algorithm: Model Predictive Control (MPC) for drive cycle tracking.
  • Development Environment: MATLAB/Simulink with CUDA integration.
  • Processor-in-the-Loop (PiL) Target: NVIDIA GTX 1050 Ti graphics card as a DRIVE AGX Thor proxy.

Project Goals

Emulate the NVIDIA DRIVE AGX Thor integration workflow on a consumer device.

The primary objective of this project is to emulate the integration workflow of the NVIDIA DRIVE ecosystem using a consumer-grade NVIDIA GPU. The goal is to replicate, at a smaller scale, how planning and control algorithms are deployed, executed, and validated on NVIDIA's autonomous driving platforms within a Processor-in-the-Loop (PiL) simulation environment.


Figure 2: NVIDIA's three-computer solution for autonomous vehicles.

Implement and optimize MPC algorithms for GPU execution.

In parallel, this project focuses on optimizing a Model Predictive Control (MPC) algorithm for execution on a GPU. By translating a MATLAB-based MPC formulation into CUDA using GPU Coder, the project demonstrates how real-time control tasks can be offloaded to GPU hardware to improve performance.

Simulink block diagram with the control algorithm implementation

Figure 3: Simulink block diagram of a Model Predictive Controller for torque control on a 1-DOF vehicle.

Core Project Components

Control Logic (MATLAB)

The core Model Predictive Control algorithm implements a condensed Quadratic Programming (QP) formulation to calculate optimal torque inputs from the reference velocity and current state. This is the source file used by GPU Coder to generate the CUDA kernels.
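As a hedged illustration of a condensed QP of this kind (the plant parameters, horizon, and weights below are placeholder assumptions, not the project's actual values), the problem can be assembled as:

```matlab
% Sketch of a condensed MPC QP for a 1-DOF longitudinal model.
% All numeric values are illustrative assumptions.
A = 0.99; B = 0.05;            % discretized single-state velocity dynamics
N = 10;                        % prediction horizon
Q = 10;  R = 0.1;              % tracking and torque-effort weights

% Prediction matrices so that X = F*x0 + G*U over the horizon
F = zeros(N,1); G = zeros(N,N);
for i = 1:N
    F(i) = A^i;
    for j = 1:i
        G(i,j) = A^(i-j) * B;
    end
end
Qbar = Q * eye(N); Rbar = R * eye(N);

% Condensed cost J(U) = 0.5*U'*H*U + f'*U
x0   = 0;                      % current velocity
vref = 20 * ones(N,1);         % reference velocity trajectory
H = G' * Qbar * G + Rbar;
f = G' * Qbar * (F * x0 - vref);

U  = -H \ f;                   % unconstrained minimizer
u0 = U(1);                     % first torque input is applied to the plant
```

Condensing eliminates the state variables, so only the input sequence U is optimized; the resulting dense matrix products are exactly the kind of work that maps well onto GPU kernels.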

GPU Acceleration (CUDA)

The folder src/scripts/computations/CUDA/ contains the generated C++/CUDA source code (.cu, .h). These files represent the optimized kernels that execute the MPC prediction and cost evaluation in parallel on the NVIDIA GTX 1050 Ti.
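A minimal sketch of the GPU Coder invocation that would regenerate such files (the entry-point name, argument sizes, and output folder below are assumptions for illustration):

```matlab
% Generate CUDA code from the MATLAB MPC entry point (names assumed).
cfg = coder.gpuConfig('lib');                  % static-library target
cfg.GpuConfig.ComputeCapability = '6.1';       % GTX 1050 Ti (Pascal)
codegen -config cfg modelPredictiveController ...
    -args {0, zeros(10,1)} ...                 % example input sizes
    -d src/scripts/computations/CUDA
```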

Simulation Environment (Simulink)

The main file src/simulations/main.slx is the top-level simulation harness. It integrates the vehicle dynamics plant, the drive cycle reference generator, and the controller into a complete closed-loop simulation. The file src/models/modelPredictiveController.slx is the controller subsystem model; it wraps the MATLAB Function block containing the MPC logic.

References

NVIDIA On Demand Explanation Video

This project is inspired by NVIDIA's developments in autonomous vehicle computing, particularly their DRIVE AGX Thor platform. For more information, see NVIDIA's presentation on autonomous driving solutions.


Figure 4: NVIDIA Autonomous Driving Planning and Control Architecture.

Mathworks MATLAB to CUDA translation process

The MathWorks resource Deploy MATLAB and Simulink to NVIDIA GPUs provides a comprehensive overview of the workflow for generating optimized CUDA code from MATLAB and Simulink, and it served as a guide for the development of this project. It demonstrates how to use GPU Coder to translate high-level MATLAB code into CUDA kernels that can be deployed directly onto NVIDIA GPUs for accelerated performance.


Figure 5: Simulink workflow for development and deployment of GPU applications.

Tools and Setup

MATLAB/SIMULINK Toolboxes

C/C++ Compiler

To generate and compile CUDA code from MATLAB/Simulink, you must install a supported C++ compiler. You cannot simply install the Microsoft Visual C++ Redistributables; you must install the full Visual Studio IDE. This installation provides the necessary cl.exe compiler, linker, and build toolchain that MATLAB's GPU Coder requires to translate MATLAB code into CUDA kernels and compile them for your NVIDIA GPU.

If you are using a recent version of MATLAB, you may encounter the following error when compiling with recent Visual Studio versions:

fatal error C1189: #error: -- unsupported Microsoft Visual Studio version!

This occurs because recent VS 2022 updates increased the compiler version (v19.40+) beyond what the current CUDA Toolkit supports.

To resolve this, you must install a previous version of Visual Studio. Since the 2019 Community edition is no longer easily accessible, install Visual Studio Professional 2019. You generally do not need an active Professional subscription just to install the C++ toolchains required for compilation.

Download Link: Visual Studio 2019 Release History
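Once Visual Studio 2019 is installed, the toolchain can be sanity-checked from MATLAB using standard GPU Coder utilities (a sketch of the check, not a project-specific script):

```matlab
% Select the Visual Studio 2019 C++ compiler for code generation
mex -setup C++

% Verify GPU, CUDA toolkit, and compiler compatibility for GPU Coder
envCfg = coder.gpuEnvConfig('host');
envCfg.BasicCodegen = 1;       % run a basic code-generation test
coder.checkGpuInstall(envCfg);
```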
