This repository was archived by the owner on Mar 7, 2020. It is now read-only.
2 changes: 1 addition & 1 deletion lib/cuda/driver/function.rb

@@ -187,7 +187,7 @@ def launch_grid(xdim, ydim = 1)
 # will execute on the default stream 0.
 # @overload launch_grid_async(xdim, stream)
 # @overload launch_grid_async(xdim, ydim, stream)
-# @param [Integer] xdim The x dimensional size
+# @param [Integer] xdim The x dimensional size
 def launch_grid_async(xdim, ydim = 1, stream)
   s = Pvt::parse_stream(stream)
   status = API::cuLaunchGridAsync(self.to_api, xdim, ydim, s)
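The signature `launch_grid_async(xdim, ydim = 1, stream)` uses Ruby's unusual but legal ordering of an optional parameter before a trailing required one, which is what lets both `@overload` forms work. A minimal pure-Ruby sketch (hypothetical stand-in, no CUDA needed) of how the arguments bind:

```ruby
# With an optional parameter before a trailing required one, a two-argument
# call binds the last argument to the required parameter (stream) and the
# optional parameter (ydim) keeps its default.
def launch_grid_async(xdim, ydim = 1, stream)
  [xdim, ydim, stream]
end

launch_grid_async(8, :stream0)     # => [8, 1, :stream0]  (ydim defaults to 1)
launch_grid_async(8, 2, :stream0)  # => [8, 2, :stream0]  (ydim given explicitly)
```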
2 changes: 1 addition & 1 deletion lib/cuda/runtime/memory.rb

@@ -34,7 +34,7 @@ class CudaDeviceMemory

 # Allocate memory on the device.
 # @param [Integer] nbytes The number of bytes of memory to allocate.
-# @return [*SGC::Memory::MemoryPointer] A memory pointer to the allocated device memory.
+# @return [*SGC::Memory::MemoryPointer] A memory pointer to the allocated device memory.
 #
 # @note The returned memory pointer is enabled to call _free_ method on itself.
 def self.malloc(nbytes)
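The `@note` says the pointer returned by `malloc` can call `free` on itself. A speculative pure-Ruby sketch of one way such a self-freeing pointer could be wired up (hypothetical `FakeMemoryPointer` class and a no-op deallocation, not the gem's actual implementation):

```ruby
# A plain value object standing in for SGC::Memory::MemoryPointer.
class FakeMemoryPointer
  attr_reader :nbytes

  def initialize(nbytes)
    @nbytes = nbytes
  end
end

# malloc attaches a free method to the returned object itself via
# define_singleton_method, so callers can write ptr.free directly.
def malloc(nbytes)
  ptr = FakeMemoryPointer.new(nbytes)
  ptr.define_singleton_method(:free) do
    @nbytes = 0  # stand-in for the real device deallocation call
    self
  end
  ptr
end

ptr = malloc(1024)
ptr.free  # the pointer frees itself
```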
6 changes: 3 additions & 3 deletions lib/cuda/runtime/stream.rb

@@ -86,9 +86,9 @@ def wait_event(event, flags = 0)


 # Let all future operations submitted to any CUDA stream wait until _event_ complete before beginning execution.
-# @overload wait_event(event)
-# @overload wait_event(event, flags)
-# @param (see CudaStream#wait_event)
+# @overload wait_event(event)
+# @overload wait_event(event, flags)
+# @param (see CudaStream#wait_event)
 def self.wait_event(event, flags = 0)
   status = API::cudaStreamWaitEvent(nil, event.to_api, flags)
   Pvt::handle_error(status, "Failed to make any CUDA stream's future operations to wait event: flags = #{flags}.")
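The class-level `self.wait_event` differs from the instance method only in the stream handle it passes to `cudaStreamWaitEvent`: `nil` here, which the doc comment says makes all streams wait, versus the instance's own handle. A pure-Ruby sketch of the two call forms (hypothetical `FakeAPI`/`FakeStream` stand-ins, not the gem's real classes):

```ruby
# Stand-in for the API module: records the arguments it was called with.
module FakeAPI
  def self.cudaStreamWaitEvent(stream, event, flags)
    { stream: stream, event: event, flags: flags }
  end
end

class FakeStream
  def initialize(handle)
    @handle = handle
  end

  # Instance form: operations in this particular stream wait on the event.
  def wait_event(event, flags = 0)
    FakeAPI.cudaStreamWaitEvent(@handle, event, flags)
  end

  # Class form: a nil stream handle, which per the doc comment makes
  # future operations in any stream wait on the event.
  def self.wait_event(event, flags = 0)
    FakeAPI.cudaStreamWaitEvent(nil, event, flags)
  end
end

FakeStream.wait_event(:ev)              # stream handle is nil
FakeStream.new(:h1).wait_event(:ev, 2)  # stream handle is :h1, flags 2
```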