# ROS 2 Benchmark Container

A Dockerized environment for benchmarking the performance of various ROS 2 Middleware (RMW) implementations.
This repository provides a framework to build and run performance tests for various ROS 2 middleware implementations. It uses Docker to create a consistent and reproducible environment for running benchmarks. The main goal is to compare the performance of different RMWs under various conditions, such as message sizes, communication patterns (pub/sub, client/server), and number of nodes.
The benchmark results can be used to:
- Evaluate the performance of a specific RMW.
- Compare the performance of different RMWs.
- Identify performance bottlenecks in a ROS 2 system.
## Prerequisites

- Docker engine with BuildKit capabilities.

## Setup

1. Clone the repository and initialize submodules:

   ```bash
   git clone <repository-url>
   cd ros2_benchmark_container
   git submodule update --init --recursive
   ```

2. Set up the BuildKit builder:

   Using a BuildKit builder is recommended for faster and more efficient builds.

   ```bash
   docker buildx create --use
   ```
The following ROS 2 distributions are supported:

- `jazzy` (default)
- `kilted`
- `rolling`
Convenience scripts are provided in the `docker/` directory to simplify building, running, deploying, and attaching to the benchmark container. Make sure they are executable:

```bash
chmod +x docker/build docker/run docker/attach docker/deploy
```

The `docker/build` script automates building the container, including all benchmarking packages and the sourcing of environment variables.

From the root of this repository, run:

```bash
docker/build
```

This will automatically build containers for the ROS 2 distributions specified in `docker-bake.hcl`. By default, this is `jazzy`.

To build for an alternate distro, use the `-d` flag:

```bash
docker/build -d rolling
```

By default, containers are built for the host architecture. To build a container compatible with the `arm64` architecture, pass `arm64` as an argument to the `-a` flag:

```bash
docker/build -a arm64
```

Note that `arm64` builds are currently much slower than `amd64` builds, as BuildKit uses QEMU for emulation-based cross-building.
1. Attach to the container:

   The `docker/run` script starts the container and attaches a shell to it. It also mounts the `benchmark_results/${ROS_DISTRO}` directory from your host into the container's `/benchmark_results` directory to persist the results.

   ```bash
   docker/run -d <ros-distro>
   ```

   For example, to run the `kilted` container:

   ```bash
   docker/run -d kilted
   ```

   If no distribution is specified, it defaults to `jazzy`.

2. Run all benchmarks:

   Once inside the container, you can use the `run_all_benchmarks` alias to execute all benchmarks:

   ```bash
   run_all_benchmarks
   ```

   This will take a significant amount of time to complete. To perform a quick test run, you can specify a shorter duration with the `-t` flag (e.g., 1 second per test):

   ```bash
   run_all_benchmarks -t 1
   ```
The script will create a new directory inside `/benchmark_results` (e.g., `results_07_01_25_00h25`) containing the raw results. By default, it will also automatically generate post-processed results. To disable this, use the `--no-results` flag.

> **NOTE**: If you have included `rmw_zenoh` in your test matrix, the router will automatically spawn in the background before the benchmarks run, and be automatically killed on exit. For a given test matrix, a custom router config can be specified with `ZENOH_ROUTER_CONFIG_URI`. In the absence of one, the default `profiles/ZENOH_ROUTER_DEFAULT_CONFIG.json5` is used.
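For instance, assuming `ZENOH_ROUTER_CONFIG_URI` is supplied as an environment variable (the path below is purely illustrative):

```bash
# Hypothetical: run the benchmarks against a custom zenoh router config
export ZENOH_ROUTER_CONFIG_URI=/benchmark_results/my_router_config.json5
run_all_benchmarks
```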
At the end of a successful benchmark run, the results are automatically processed. The output directory will contain:
- `raw_results/`: The raw data from the performance tests.
- `parsed_results/`: Generated plots (`.png` files) and summary data in CSV format.
- `report_<date>.pdf`: A comprehensive PDF report summarizing the results with plots and analysis (e.g., `report_06_01_26_13h11.pdf`).
If you run the benchmarks with `--no-results`, you can manually generate the analysis by running the `generate_all_metrics` alias:

```bash
generate_all_metrics /benchmark_results/<results-directory>
```

Replace `<results-directory>` with the actual directory name (e.g., `results_07_01_25_00h25`).
The Docker container provides a standardized environment with the following key components:
- **ROS 2 Distributions**: `Jazzy Jalisco` (default). The supported distributions can be configured in `docker-bake.hcl`.
- **RMW Implementations**:
  - `rmw_fastrtps_cpp` (default in ROS 2)
  - `rmw_cyclonedds_cpp`
  - `rmw_zenoh_cpp`
- **Benchmarking Tools**:
  - ros2-performance: The underlying framework used to create and run the performance tests. This is a public iRobot repository, and the code is included in this project as a git submodule in the `external/` folder.
- **Analysis Tools**:
  - `pandas`, `numpy`, `matplotlib`, `scipy`: For data manipulation, analysis, and plotting.
  - `reportlab`: For generating the final PDF report.
The executor implementation used for the benchmarks is configurable by the user when launching the container. By default, the `EventsExecutor` is used, but you can specify an alternate executor via the `-x` flag, e.g.:

```bash
docker/run -x SingleThreadedExecutor
```

By default, the available executors are the `SingleThreadedExecutor`, `EventsExecutor`, and `MultiThreadedExecutor`. If you have an executor from a different package that you'd like to benchmark, add it as a submodule in `external/`, include it in the `package.xml` and `CMakeLists.txt` for ros2-performance, add it to the list of available executors, and extend the executor lists in `run_single_process_benchmark` and `run_multi_process_benchmark`.

An example of how to modify ros2-performance to add a new executor can be found here.
The `benchmark/test-matrix` directory contains configuration files that define the test matrices for the benchmarks. Each file specifies a set of tests to be run with different configurations.

To create a new test matrix, create a new `.conf` file in this directory. The file should define the following variables:

- `OUTPUT_DIR_NAME`: The name of the directory where the benchmark results will be saved.
- `TOPOLOGIES`: A list of topology files to be used in the benchmark.
- `TOPOLOGIES_DIR`: The directory where the topology files are located.
- `PROFILES_DIR`: The directory where the RMW profile files are located.
- `RMW_LIST`: A list of the RMW implementations to be tested.
- `COMMS_<rmw>`: For each RMW in `RMW_LIST`, the communication modes to be tested (e.g., `ipc_on`, `ipc_off`, `loaned`).
- `LOANED_ENV_VARS_<rmw>`: For each RMW, environment variables required for running with loaned messages.

For example, the `single_process_pub_sub.conf` file defines a test matrix for single-process pub/sub benchmarks. After creating a new test matrix, you need to add a call to `run_benchmark` in the `run_all_benchmarks.sh` script to execute it.
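As an illustration, a minimal test matrix might look like the sketch below. The file name, topology name, and values are hypothetical; consult `single_process_pub_sub.conf` for the authoritative shape:

```bash
# benchmark/test-matrix/my_matrix.conf (hypothetical example)
OUTPUT_DIR_NAME="my_matrix"
TOPOLOGIES_DIR="topologies"
TOPOLOGIES="my_new_topology"
PROFILES_DIR="profiles"
RMW_LIST="rmw_fastrtps_cpp rmw_cyclonedds_cpp"

# Communication modes to test per RMW
COMMS_rmw_fastrtps_cpp="ipc_on ipc_off loaned"
COMMS_rmw_cyclonedds_cpp="ipc_on ipc_off"

# Placeholder env vars for loaned-message runs
LOANED_ENV_VARS_rmw_fastrtps_cpp="EXAMPLE_LOAN_VAR=1"
```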
The `benchmark/topologies` directory contains the JSON files that define the architecture of the nodes to be benchmarked.

A Python script, `generate_topology.py`, is provided to generate new topologies. For example, to generate a topology with 10 topics at 50 Hz in a single process:

```bash
python3 benchmark/topologies/generate_topology.py --num-topics 10 --freq 50 --process-mode single --output my_new_topology
```

This will generate `my_new_topology.json` and `my_new_topology_loaned.json` files in the current directory. You can then move these files to the appropriate topology directory.

Once the topology is created, add it to the desired test matrix configuration file in the `benchmark/test-matrix` directory to include it in the benchmarks.
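For orientation, a topology file pairs publisher and subscriber nodes on named topics. The sketch below is heavily simplified and hypothetical; the exact schema is defined by ros2-performance, so use a generated file as the authoritative reference:

```json
{
  "nodes": [
    {
      "node_name": "pub_node",
      "publishers": [
        { "topic_name": "topic_0", "msg_type": "stamped10b", "period_ms": 20 }
      ]
    },
    {
      "node_name": "sub_node",
      "subscribers": [
        { "topic_name": "topic_0", "msg_type": "stamped10b" }
      ]
    }
  ]
}
```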
To add a new RMW implementation to the benchmark suite:

1. **Update `Dockerfile`**: Add the installation command for the new RMW package in the `Dockerfile`.
2. **Update test matrices**: Modify the `.conf` files in `benchmark/test-matrix` to include the new RMW in the `RMW_LIST` variable.
3. **(Optional) Define RMW-specific variables**: For a new RMW named `my_middleware`, you may need to define:
   - `COMMS_my_middleware`: Permutations of communication modes to test (e.g., `ipc_on`, `ipc_off`).
   - `LOANED_ENV_VARS_my_middleware`: Environment variables for loaned messages.
4. **(Optional) Add XML profiles**: If the new RMW requires specific configuration, add new XML profiles in the `benchmark/profiles` directory.
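Taken together, the test-matrix additions for a hypothetical `rmw_my_middleware` might look like this (all names and values below are placeholders):

```bash
# Hypothetical .conf additions for rmw_my_middleware
RMW_LIST="rmw_fastrtps_cpp rmw_my_middleware"
COMMS_rmw_my_middleware="ipc_on ipc_off loaned"
# Placeholder; substitute whatever the middleware requires for message loaning
LOANED_ENV_VARS_rmw_my_middleware="MY_MIDDLEWARE_ENABLE_LOANING=1"
```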
Convenience tools are included to deploy benchmark containers built on this host to another host. Checks are run automatically to make sure the target architecture matches the Docker image architecture. A common use case is benchmarking on more constrained hardware with limited build capability, such as a Raspberry Pi.
1. **Ensure the remote host has this repo**: Make sure the `ros2-benchmark-container` repo is available on the remote machine.

2. **Ensure Docker and SSH access**: `docker/deploy` will automatically SSH into the remote host to install the Docker image from a file. Please ensure you have SSH access to the target host and that it is capable of running Docker without `sudo`.

3. **Build and deploy your image**: For example, to deploy a `kilted` container to a remote host with `arm64` architecture, you would do:

   ```bash
   docker/build -d kilted -a arm64
   docker/deploy -d kilted -a arm64 -u REMOTE_USER -h REMOTE_IP
   ```
```
.
├── benchmark/        # Scripts and configurations for running benchmarks
├── docker/           # Helper scripts for building and running the Docker container
├── external/         # Git submodules for external projects (e.g., ros2-performance)
├── Dockerfile        # Dockerfile for building the benchmark environment
├── docker-bake.hcl   # Docker bake file for multi-platform builds
├── CONTRIBUTING.md   # Contribution guidelines
├── LICENSE           # Project license
└── README.md         # This file
```
Contributions are welcome! Please read the CONTRIBUTING.md file for guidelines on how to contribute to this project.
This project is licensed under the terms of the LICENSE file.