Caution
This release focuses on the C++ API of hipMM. While the Python build process has been ported and binary wheels can be created, porting the Python component of this project is still ongoing work.
Note
This ROCm™ port is derived from the NVIDIA RAPIDS® RMM project (version 25.02). It aims to follow the latter's directory structure, file naming and API naming as closely as possible to minimize porting friction for users that are interested in using both projects.
Achieving optimal performance in GPU-centric workflows frequently requires customizing how host and device memory are allocated. For example, using "pinned" host memory for asynchronous host <-> device memory transfers, or using a device memory pool sub-allocator to reduce the cost of dynamic device memory allocation.
The goal of the ROCm-DS HIP Memory Manager (hipMM) is to provide:
- A common interface that allows customizing device and host memory allocation
- A collection of implementations of the interface
- A collection of data structures that use the interface for memory allocation
For information on the interface hipMM provides and how to use hipMM in your C++ code, see below.
For a walkthrough about the design of the RAPIDS Memory Manager, read Fast, Flexible Allocation for NVIDIA CUDA with RAPIDS Memory Manager on the NVIDIA Developer Blog.
Note
We support only AMD GPUs. Use the NVIDIA RAPIDS package for NVIDIA GPUs.
Note
Currently, it is not possible to install hipMM via conda.
See install instructions.
See build instructions.
hipMM uses CPM.cmake to
handle third-party dependencies like spdlog, Thrust, GoogleTest,
GoogleBenchmark. In general you won't have to worry about it. If CMake
finds an appropriate version on your system, it uses it (you can
help it along by setting CMAKE_PREFIX_PATH to point to the
installed location). Otherwise those dependencies will be downloaded as
part of the build.
If you frequently start new builds from scratch, consider setting the
environment variable CPM_SOURCE_CACHE to an external download
directory to avoid repeated downloads of the third-party dependencies.
The installed hipMM library provides a set of config files that makes it easy to
integrate hipMM into your own CMake project. Add the following to CMakeLists.txt:
```cmake
find_package(rmm [VERSION])
# ...
target_link_libraries(<your-target> (PRIVATE|PUBLIC|INTERFACE) rmm::rmm)
```

Since hipMM is a header-only library, this does not actually link hipMM,
but it makes the headers available and pulls in transitive dependencies.
If hipMM is not installed in a default location, use
CMAKE_PREFIX_PATH or rmm_ROOT to point to its location.
One of hipMM's dependencies is the Thrust library, so the above
automatically pulls in Thrust by means of a dependency on the
rmm::Thrust target. By default it uses the standard configuration of
Thrust. If you want to customize it, you can set the variables
THRUST_HOST_SYSTEM and THRUST_DEVICE_SYSTEM; see
Thrust's CMake documentation.
RMM uses CPM.cmake to manage its dependencies, including CCCL, and you can use CPM for your project's dependency on RMM.
There is an issue with using CPM's single-argument compact syntax for
RMM/CCCL: it transitively marks targets as SYSTEM dependencies.
This causes the CCCL headers pulled in through CPM to have lower preprocessor
priority than the (potentially outdated) CCCL headers provided
by the CUDA SDK. To avoid this issue, use CPM's multi-argument syntax
instead:
```cmake
CPMAddPackage(NAME rmm [VERSION]
              GITHUB_REPOSITORY rapidsai/rmm
              SYSTEM Off)
# ...
target_link_libraries(<your-target> (PRIVATE|PUBLIC|INTERFACE) rmm::rmm)
```

The first goal of hipMM is to provide a common interface for device and host memory allocation. This allows both users and implementers of custom allocation logic to program to a single interface.
To this end, hipMM defines two abstract interface classes:
- `rmm::mr::device_memory_resource` for device memory allocation
- `rmm::mr::host_memory_resource` for host memory allocation
These classes are based on the
std::pmr::memory_resource interface
class introduced in C++17 for polymorphic memory allocation.
rmm::mr::device_memory_resource is the base class that defines the interface for allocating and
freeing device memory.
It has two key functions:
- `void* device_memory_resource::allocate(std::size_t bytes, cuda_stream_view s)`
  - Returns a pointer to an allocation of at least `bytes` bytes.
- `void device_memory_resource::deallocate(void* p, std::size_t bytes, cuda_stream_view s)`
  - Reclaims a previous allocation of size `bytes` pointed to by `p`. `p` must have been returned by a previous call to `allocate(bytes)`; otherwise, behavior is undefined.
It is up to a derived class to provide implementations of these functions. See
available resources for example device_memory_resource derived classes.
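As an illustrative sketch (not a resource shipped with hipMM), a derived resource overrides the private virtual do_allocate and do_deallocate hooks that back the public allocate/deallocate functions; this minimal example simply forwards to hipMalloc/hipFree and ignores the stream argument:

```c++
#include <rmm/mr/device/device_memory_resource.hpp>

#include <hip/hip_runtime_api.h>

#include <cstddef>
#include <new>

class my_hip_resource final : public rmm::mr::device_memory_resource {
  void* do_allocate(std::size_t bytes, rmm::cuda_stream_view) override
  {
    void* p{nullptr};
    if (hipMalloc(&p, bytes) != hipSuccess) { throw std::bad_alloc{}; }
    return p;
  }

  void do_deallocate(void* p, std::size_t, rmm::cuda_stream_view) override
  {
    hipFree(p);  // synchronous free; a real resource could honor the stream
  }
};
```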
Unlike std::pmr::memory_resource, rmm::mr::device_memory_resource does not allow specifying an
alignment argument. All allocations are required to be aligned to at least 256B. Furthermore,
device_memory_resource adds an additional cuda_stream_view argument to allow specifying the stream
on which to perform the (de)allocation.
rmm::mr::device_memory_resource is a base class that provides stream-ordered memory allocation.
This allows optimizations such as re-using memory deallocated on the same stream without the
overhead of synchronization.
A call to device_memory_resource::allocate(bytes, stream_a) returns a pointer that is valid to use
on stream_a. Using the memory on a different stream (say stream_b) is Undefined Behavior unless
the two streams are first synchronized, for example by using hipStreamSynchronize(stream_a) or by
recording an event on stream_a and then calling hipStreamWaitEvent(stream_b, event).
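To make this concrete, here is a brief sketch; stream_a and stream_b are assumed to be rmm::cuda_stream_view objects, and use_data, grid, and block are hypothetical kernel and launch parameters:

```c++
auto* mr = rmm::mr::get_current_device_resource();

void* p = mr->allocate(128, stream_a);  // the allocation is ordered on stream_a

// Make stream_b wait for the work enqueued on stream_a before touching `p`.
hipEvent_t event;
hipEventCreateWithFlags(&event, hipEventDisableTiming);
hipEventRecord(event, stream_a.value());
hipStreamWaitEvent(stream_b.value(), event, 0);

use_data<<<grid, block, 0, stream_b.value()>>>(p);  // now valid on stream_b
hipEventDestroy(event);
```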
The stream specified to device_memory_resource::deallocate should be a stream on which it is valid
to use the deallocated memory immediately for another allocation. Typically this is the stream
on which the allocation was last used before the call to deallocate. The passed stream may be
used internally by a device_memory_resource for managing available memory with minimal
synchronization, and it may also be synchronized at a later time, for example using a call to
hipStreamSynchronize().
For this reason, it is Undefined Behavior to destroy a stream that is passed to
device_memory_resource::deallocate. If the stream on which the allocation was last used has been
destroyed before calling deallocate or it is known that it will be destroyed, it is likely better
to synchronize the stream (before destroying it) and then pass a different stream to deallocate
(e.g. the default stream).
Note that device memory data structures such as rmm::device_buffer and rmm::device_uvector
follow these stream-ordered memory allocation semantics and rules.
For further information about stream-ordered memory allocation semantics, read Using the NVIDIA CUDA Stream-Ordered Memory Allocator on the NVIDIA Developer Blog.
hipMM provides several device_memory_resource derived classes to satisfy various user requirements.
For more detailed information about these resources, see their respective documentation.
cuda_memory_resource: Allocates and frees device memory using hipMalloc and hipFree.
managed_memory_resource: Allocates and frees device memory using hipMallocManaged and hipFree.
Note that managed_memory_resource cannot be used with NVIDIA Virtual GPU Software (vGPU, for use
with virtual machines or hypervisors) because NVIDIA Unified Memory is not supported by
NVIDIA vGPU.
pool_memory_resource: A coalescing, best-fit pool sub-allocator.
fixed_size_memory_resource: A memory resource that can only allocate a single fixed size. Average allocation and deallocation cost is constant.
binning_memory_resource: Configurable to use multiple upstream memory resources for allocations that fall within different
bin sizes. Often configured with multiple bins backed by fixed_size_memory_resources and a single
pool_memory_resource for allocations larger than the largest bin size.
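One possible configuration, sketched here with assumed bin sizes (the exponent-based constructor arguments and the 256 B to 4 MiB range are illustrative choices, not recommendations):

```c++
rmm::mr::cuda_memory_resource cuda_mr;

// Pool that serves requests larger than the largest bin.
rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource> pool_mr{
  &cuda_mr, rmm::percent_of_free_device_memory(50)};

// Fixed-size bins for power-of-two sizes from 2^8 (256 B) to 2^22 (4 MiB);
// larger requests fall through to pool_mr.
rmm::mr::binning_memory_resource<decltype(pool_mr)> binning_mr{&pool_mr, 8, 22};

rmm::mr::set_current_device_resource(&binning_mr);
```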
hipMM users commonly need to configure a device_memory_resource object to use for all allocations
where another resource has not explicitly been provided. A common example is configuring a
pool_memory_resource to use for all allocations to get fast dynamic allocation.
To enable this use case, hipMM provides the concept of a "default" device_memory_resource. This
resource is used when another is not explicitly provided.
Accessing and modifying the default resource is done through two functions:
- `device_memory_resource* get_current_device_resource()`
  - Returns a pointer to the default resource for the current device.
  - The initial default memory resource is an instance of `cuda_memory_resource`.
  - This function is thread safe with respect to concurrent calls to it and `set_current_device_resource()`.
  - For more explicit control, you can use `get_per_device_resource()`, which takes a device ID.
- `device_memory_resource* set_current_device_resource(device_memory_resource* new_mr)`
  - Updates the default memory resource pointer for the current device to `new_mr`.
  - Returns the previous default resource pointer.
  - If `new_mr` is `nullptr`, then resets the default resource to `cuda_memory_resource`.
  - This function is thread safe with respect to concurrent calls to it and `get_current_device_resource()`.
  - For more explicit control, you can use `set_per_device_resource()`, which takes a device ID.
```c++
rmm::mr::cuda_memory_resource cuda_mr;
// Construct a resource that uses a coalescing best-fit pool allocator
// with the pool initially half of available device memory
auto initial_size = rmm::percent_of_free_device_memory(50);
rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource> pool_mr{&cuda_mr, initial_size};
rmm::mr::set_current_device_resource(&pool_mr); // Updates the current device resource pointer to `pool_mr`
rmm::mr::device_memory_resource* mr = rmm::mr::get_current_device_resource(); // Points to `pool_mr`
```

A device_memory_resource should only be used when the active device is the same device
that was active when the device_memory_resource was created. Otherwise behavior is undefined.
If a device_memory_resource is used with a stream associated with a different device than the
device for which the memory resource was created, behavior is undefined.
Creating a device_memory_resource for each device requires care to set the current device before
creating each resource, and to maintain the lifetime of the resources as long as they are set as
per-device resources. Here is an example loop that creates unique_ptrs to pool_memory_resource
objects for each device and sets them as the per-device resource for that device.
```c++
using pool_mr = rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource>;
std::vector<std::unique_ptr<pool_mr>> per_device_pools;
for (int i = 0; i < N; ++i) {
  hipSetDevice(i); // set device i before creating MR
  // Use a vector of unique_ptr to maintain the lifetime of the MRs
  // Note: for brevity, omitting creation of upstream and computing initial_size
  per_device_pools.push_back(std::make_unique<pool_mr>(upstream, initial_size));
  // Set the per-device resource for device i
  set_per_device_resource(cuda_device_id{i}, per_device_pools.back().get());
}
```

Note that the device that is current when creating a device_memory_resource must also be
current any time that device_memory_resource is used to deallocate memory, including in a
destructor. The RAII class rmm::device_buffer and classes that use it as a backing store
(rmm::device_scalar and rmm::device_uvector) handle this by storing the active device when the
constructor is called, and then ensuring that the stored device is active whenever an allocation or
deallocation is performed (including in the destructor). The user must therefore only ensure that
the device active during creation of an rmm::device_buffer matches the active device of the
memory resource being used.
Here is an incorrect example that creates a memory resource on device 0 and then uses it to
allocate a device_buffer on device 1:
```c++
{
  RMM_CUDA_TRY(hipSetDevice(0));
  auto mr = rmm::mr::cuda_memory_resource{};
  {
    RMM_CUDA_TRY(hipSetDevice(1));
    // Invalid, current device is 1, but MR is only valid for device 0
    rmm::device_buffer buf(16, rmm::cuda_stream_default, &mr);
  }
}
```

A correct example creates the device buffer with device 0 active. After that it is safe to switch
devices and let the buffer go out of scope and destruct with a different device active. For example,
this code is correct:
```c++
{
  RMM_CUDA_TRY(hipSetDevice(0));
  auto mr = rmm::mr::cuda_memory_resource{};
  rmm::device_buffer buf(16, rmm::cuda_stream_default, &mr);
  RMM_CUDA_TRY(hipSetDevice(1));
  ...
  // No need to switch back to device 0 before ~buf runs
}
```

rmm::device_vector uses an rmm::mr::thrust_allocator to enable thrust::device_vector to
allocate and deallocate memory using hipMM. As such, the usual rules for usage of the backing memory
resource apply: the active device must match the active device at resource construction time. To
facilitate use in an RAII setting, rmm::mr::thrust_allocator records the active device at
construction time and ensures that device is active whenever it allocates or deallocates memory.
Usage of rmm::device_vector with multiple devices is therefore the same as rmm::device_buffer.
One must create device_vectors with the correct device active, but it is safe to destroy them
with a different active device.
For example, recapitulating the previous example using rmm::device_vector:
```c++
{
  RMM_CUDA_TRY(hipSetDevice(0));
  auto mr = rmm::mr::cuda_memory_resource{};
  rmm::device_vector<int> vec(16, rmm::mr::thrust_allocator<int>(rmm::cuda_stream_default, &mr));
  RMM_CUDA_TRY(hipSetDevice(1));
  ...
  // No need to switch back to device 0 before ~vec runs
}
```

Note
Although allocation and deallocation in the thrust_allocator run with the correct active device,
modification of rmm::device_vector might necessitate a kernel launch, and this must run with the
correct device active. For example, .resize() might both allocate and launch a kernel to
initialize new elements: the user must arrange for this kernel launch to occur with the correct
device for the memory resource active.
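For example, extending the previous sketch, one way to honor this is to make the resource's device current again before resizing (assuming the resource and vector were created with device 0 active):

```c++
RMM_CUDA_TRY(hipSetDevice(0));
auto mr = rmm::mr::cuda_memory_resource{};
rmm::device_vector<int> vec(16, rmm::mr::thrust_allocator<int>(rmm::cuda_stream_default, &mr));

RMM_CUDA_TRY(hipSetDevice(1));
// ... work on device 1 ...

// resize() may launch a kernel to initialize the new elements, so switch back
// to the device the memory resource was created on before calling it.
RMM_CUDA_TRY(hipSetDevice(0));
vec.resize(32);
```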
rmm::cuda_stream_view is a simple non-owning wrapper around a HIP stream (hipStream_t). This wrapper's
purpose is to provide strong type safety for stream types. (hipStream_t is an alias for a pointer,
which can lead to ambiguity in APIs when it is assigned 0.) All hipMM stream-ordered APIs take a
rmm::cuda_stream_view argument.
rmm::cuda_stream is a simple owning wrapper around a HIP stream (hipStream_t). This class provides
RAII semantics (the constructor creates the stream, the destructor destroys it). An rmm::cuda_stream
can never represent the default stream or per-thread default stream; it only ever represents
a single non-default stream. rmm::cuda_stream cannot be copied, but can be moved.
rmm::cuda_stream_pool provides fast access to a pool of HIP streams. This class can be used to
create a set of cuda_stream objects whose lifetime is equal to that of the cuda_stream_pool. Using the
stream pool can be faster than creating streams on the fly. The size of the pool is configurable.
Depending on this size, multiple calls to cuda_stream_pool::get_stream() may return instances of
rmm::cuda_stream_view that represent identical streams.
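A brief usage sketch (the pool size, kernel, and launch parameters are illustrative placeholders):

```c++
// Create 16 streams up front; get_stream() hands out views into this fixed set.
rmm::cuda_stream_pool stream_pool{16};

rmm::cuda_stream_view s = stream_pool.get_stream();
kernel<<<grid, block, 0, s.value()>>>(...);
```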
All current device memory resources are thread safe unless documented otherwise. More specifically,
calls to memory resource allocate() and deallocate() methods are safe with respect to calls to
either of these functions from other threads. They are not thread safe with respect to
construction and destruction of the memory resource object.
Note that a class thread_safe_resource_adapter is provided which can be used to adapt a memory
resource that is not thread safe to be thread safe (as described above). This adapter is not needed
with any current hipMM device memory resources.
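As a sketch, a hypothetical non-thread-safe resource could be wrapped like this before being set as the current device resource:

```c++
// `my_custom_resource` is a hypothetical resource whose allocate()/deallocate()
// must not be called concurrently.
my_custom_resource unsafe_mr{...};

// The adaptor serializes allocate()/deallocate() calls to the upstream resource.
rmm::mr::thread_safe_resource_adaptor<my_custom_resource> safe_mr{&unsafe_mr};
rmm::mr::set_current_device_resource(&safe_mr);
```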
C++ interfaces commonly allow customizable memory allocation through an Allocator object.
hipMM provides several Allocator and Allocator-like classes.
A stream-ordered allocator similar to std::pmr::polymorphic_allocator.
Unlike the standard C++ Allocator interface, the allocate and deallocate functions take a cuda_stream_view indicating the stream on which the (de)allocation occurs.
stream_allocator_adaptor can be used to adapt a stream-ordered allocator to present a standard Allocator interface to consumers that may not be designed to work with a stream-ordered interface.
Example:
```c++
rmm::cuda_stream stream;
rmm::mr::polymorphic_allocator<int> stream_alloc;

// Constructs an adaptor that forwards all (de)allocations to `stream_alloc` on `stream`.
auto adapted = rmm::mr::stream_allocator_adaptor(stream_alloc, stream);

// Allocates 100 bytes using `stream_alloc` on `stream`
auto p = adapted.allocate(100);
...
// Deallocates using `stream_alloc` on `stream`
adapted.deallocate(p, 100);
```

thrust_allocator is a device memory allocator that uses the strongly typed thrust::device_ptr, making it usable with containers like thrust::device_vector.
See below for more information on using hipMM with Thrust.
rmm::device_buffer is an untyped, uninitialized RAII class for stream-ordered device memory allocation.
```c++
cuda_stream_view s{...};
// Allocates at least 100 bytes on stream `s` using the *default* resource
rmm::device_buffer b{100, s};
void* p = b.data(); // Raw, untyped pointer to underlying device memory

kernel<<<..., s.value()>>>(b.data()); // `b` is only safe to use on `s`

rmm::mr::device_memory_resource* mr = new my_custom_resource{...};
// Allocates at least 100 bytes on stream `s` using the resource `mr`
rmm::device_buffer b2{100, s, mr};
```

rmm::device_uvector<T> is a typed, uninitialized RAII class for allocation of a contiguous set of elements in device memory.
Similar to a thrust::device_vector, but as an optimization, does not default initialize the
contained elements. This optimization restricts the types T to trivially copyable types.
```c++
cuda_stream_view s{...};
// Allocates uninitialized storage for 100 `int32_t` elements on stream `s` using the
// default resource
rmm::device_uvector<int32_t> v(100, s);
// Initializes the elements to 0
thrust::uninitialized_fill(thrust::cuda::par.on(s.value()), v.begin(), v.end(), int32_t{0});

rmm::mr::device_memory_resource* mr = new my_custom_resource{...};
// Allocates uninitialized storage for 100 `int32_t` elements on stream `s` using the resource `mr`
rmm::device_uvector<int32_t> v2{100, s, mr};
```

rmm::device_scalar<T> is a typed RAII class for allocation of a single element in device memory.
This is similar to a device_uvector with a single element, but provides convenience functions like
modifying the value in device memory from the host, or retrieving the value from device to host.
```c++
cuda_stream_view s{...};
// Allocates uninitialized storage for a single `int32_t` in device memory
rmm::device_scalar<int32_t> a{s};
a.set_value(42, s); // Updates the value in device memory to `42` on stream `s`

kernel<<<..., s.value()>>>(a.data()); // Pass raw pointer to underlying element in device memory

int32_t v = a.value(s); // Retrieves the value from device to host on stream `s`
```

rmm::mr::host_memory_resource is the base class that defines the interface for allocating and
freeing host memory.
Similar to device_memory_resource, it has two key functions for (de)allocation:
- `void* host_memory_resource::allocate(std::size_t bytes, std::size_t alignment)`
  - Returns a pointer to an allocation of at least `bytes` bytes aligned to the specified `alignment`.
- `void host_memory_resource::deallocate(void* p, std::size_t bytes, std::size_t alignment)`
  - Reclaims a previous allocation of size `bytes` pointed to by `p`.
Unlike device_memory_resource, the host_memory_resource interface and behavior are identical to
std::pmr::memory_resource.
new_delete_resource: Uses the global operator new and operator delete to allocate host memory.
Allocates "pinned" host memory using cuda(Malloc/Free)Host.
hipMM does not currently provide any data structures that interface with host_memory_resource.
In the future, hipMM will provide a host-side structure similar to device_buffer and an allocator
that can be used with STL containers.
RAPIDS and other libraries make heavy use of Thrust. Thrust uses device memory in two situations:
- As the backing store for thrust::device_vector, and
- As temporary storage inside some algorithms, such as thrust::sort.
hipMM provides rmm::mr::thrust_allocator as a conforming Thrust allocator that uses
device_memory_resources.
To instruct a Thrust algorithm to use rmm::mr::thrust_allocator to allocate temporary storage, you
can use the custom Thrust device execution policy: rmm::exec_policy(stream).
```c++
thrust::sort(rmm::exec_policy(stream), ...);
```

The stream passed to rmm::exec_policy is used both for the temporary allocations made through
rmm::mr::thrust_allocator and as the stream on which the Thrust algorithm executes; these two
streams must be identical.
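For example, a sketch of sorting a device_uvector where the sort's temporary storage comes from the current device resource:

```c++
rmm::cuda_stream stream;
rmm::device_uvector<int> v(100, stream);
// ... fill `v` on `stream` ...

// Temporary storage needed by the sort is allocated through rmm::mr::thrust_allocator
// on `stream`, and the algorithm itself also executes on `stream`.
thrust::sort(rmm::exec_policy(stream), v.begin(), v.end());
```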
hipMM includes two forms of logging: memory event logging and debug logging.
Memory event logging writes details of every allocation or deallocation to a CSV (comma-separated
value) file. In C++, Memory Event Logging is enabled by using the logging_resource_adaptor as a
wrapper around any other device_memory_resource object.
Each row in the log represents either an allocation or a deallocation. The columns of the file are "Thread, Time, Action, Pointer, Size, Stream".
The CSV output files of the logging_resource_adaptor can be used as input to REPLAY_BENCHMARK,
which is available when building hipMM from source, in the gbenchmarks folder in the build directory.
This log replayer can be useful for profiling and debugging allocator issues.
The following C++ example creates a logging version of a cuda_memory_resource that outputs the log
to the file "logs/test1.csv".
std::string filename{"logs/test1.csv"};
rmm::mr::cuda_memory_resource upstream;
rmm::mr::logging_resource_adaptor<rmm::mr::cuda_memory_resource> log_mr{&upstream, filename};If a file name is not specified, the environment variable RMM_LOG_FILE is queried for the file
name. If RMM_LOG_FILE is not set, then an exception is thrown by the logging_resource_adaptor
constructor.
In Python, memory event logging is enabled when the logging parameter of rmm.reinitialize() is
set to True. The log file name can be set using the log_file_name parameter. See
help(rmm.reinitialize) for full details.
hipMM includes a debug logger which can be enabled to log trace and debug information to a file. This
information can show when errors occur, when additional memory is allocated from upstream resources,
etc. By default output is logged to stderr, but the environment variable
RMM_DEBUG_LOG_FILE can be set to specify a path and file name to dump the logs to instead.
There is a CMake configuration variable RMM_LOGGING_LEVEL, which can be set to enable compilation
of more detailed logging. The default is INFO. Available levels are TRACE, DEBUG, INFO,
WARN, ERROR, CRITICAL and OFF.
The log relies on the spdlog library.
Note that to see logging below the INFO level, the application must also set the logging level at
run time. C++ applications must call rmm::default_logger().set_level(), for example to enable all
levels of logging down to TRACE, call rmm::default_logger().set_level(spdlog::level::trace) (and compile
librmm with -DRMM_LOGGING_LEVEL=TRACE). Python applications must call rmm.set_logging_level(),
for example to enable all levels of logging down to TRACE, call rmm.set_logging_level("trace")
(and compile the RMM Python module with -DRMM_LOGGING_LEVEL=TRACE).
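For example, assuming librmm was compiled with -DRMM_LOGGING_LEVEL=TRACE, a C++ application can lower the run-time threshold like this:

```c++
// Emit all messages down to TRACE level at run time.
rmm::default_logger().set_level(spdlog::level::trace);
```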
Note that debug logging is different from the CSV memory allocation logging provided by
rmm::mr::logging_resource_adaptor. The latter is for logging a history of allocation /
deallocation actions which can be useful for replay with hipMM's replay benchmark.
Memory allocations taken from a memory resource that allocates a pool of memory (such as
pool_memory_resource and arena_memory_resource) are part of the same low-level memory
allocation. Therefore, out-of-bounds or misaligned accesses to these allocations are not likely to
be detected by tools such as
Compute Sanitizer memcheck.
Exceptions to this are cuda_memory_resource, which wraps hipMalloc, and
cuda_async_memory_resource, which uses hipMallocAsync with the runtime's built-in memory pool
functionality (requires stream-ordered memory allocation support in the runtime). Illegal memory
accesses to memory allocated by these resources are detectable with Compute Sanitizer Memcheck.
It may be possible in the future to add support for memory bounds checking with other memory resources using NVTX APIs.
There are two ways to use hipMM in Python code:
- Using the `rmm.DeviceBuffer` API to explicitly create and manage device memory allocations
- Transparently via external libraries such as CuPy and Numba
hipMM provides a MemoryResource abstraction to control how device
memory is allocated in both the above uses.
A DeviceBuffer represents an untyped, uninitialized device memory allocation. DeviceBuffers can be created by providing the size of the allocation in bytes:
```python
>>> import rmm
>>> buf = rmm.DeviceBuffer(size=100)
```

The size of the allocation and the memory address associated with it
can be accessed via the .size and .ptr attributes respectively:

```python
>>> buf.size
100
>>> buf.ptr
140202544726016
```

DeviceBuffers can also be created by copying data from host memory:
```python
>>> import rmm
>>> import numpy as np
>>> a = np.array([1, 2, 3], dtype='float64')
>>> buf = rmm.DeviceBuffer.to_device(a.tobytes())
>>> buf.size
24
```

Conversely, the data underlying a DeviceBuffer can be copied to the host:

```python
>>> np.frombuffer(buf.tobytes())
array([1., 2., 3.])
```

MemoryResource objects are used to configure how device memory allocations are made by
hipMM.
By default if a MemoryResource is not set explicitly, hipMM uses the CudaMemoryResource, which
uses hipMalloc for allocating device memory.
rmm.reinitialize() provides an easy way to initialize hipMM with specific memory resource options
across multiple devices. See help(rmm.reinitialize) for full details.
For lower-level control, the rmm.mr.set_current_device_resource() function can be
used to set a different MemoryResource for the current device. For
example, enabling the ManagedMemoryResource tells hipMM to use
hipMallocManaged instead of hipMalloc for allocating memory:
```python
>>> import rmm
>>> rmm.mr.set_current_device_resource(rmm.mr.ManagedMemoryResource())
```
⚠️ The default resource must be set for any device before allocating any device memory on that device. Setting or changing the resource after device allocations have been made can lead to unexpected behaviour or crashes. See Multiple Devices
As another example, PoolMemoryResource allows you to allocate a
large "pool" of device memory up-front. Subsequent allocations will
draw from this pool of already allocated memory. The example
below shows how to construct a PoolMemoryResource with an initial size
of 1 GiB and a maximum size of 4 GiB. The pool uses
CudaMemoryResource as its underlying ("upstream") memory resource:
```python
>>> import rmm
>>> pool = rmm.mr.PoolMemoryResource(
...     rmm.mr.CudaMemoryResource(),
...     initial_pool_size="1GiB",  # equivalent to initial_pool_size=2**30
...     maximum_pool_size="4GiB"
... )
>>> rmm.mr.set_current_device_resource(pool)
```

Other MemoryResources include:

- `FixedSizeMemoryResource` for allocating fixed blocks of memory
- `BinningMemoryResource` for allocating blocks within specified "bin" sizes from different memory resources
MemoryResources are highly configurable and can be composed together in different ways.
See help(rmm.mr) for more information.