diff --git a/1-CUDA-Introduction-1.pptx b/1-CUDA-Introduction-1.pptx
new file mode 100644
index 0000000..84caaa1
Binary files /dev/null and b/1-CUDA-Introduction-1.pptx differ
diff --git a/2-GPU-Architecture-Overview.pptx b/2-GPU-Architecture-Overview.pptx
new file mode 100644
index 0000000..3294d91
Binary files /dev/null and b/2-GPU-Architecture-Overview.pptx differ
diff --git a/3-Parallel-Algorithms-1.pptx b/3-Parallel-Algorithms-1.pptx
new file mode 100644
index 0000000..e27edac
Binary files /dev/null and b/3-Parallel-Algorithms-1.pptx differ
diff --git a/BugPNG.PNG b/BugPNG.PNG
new file mode 100644
index 0000000..9798ae4
Binary files /dev/null and b/BugPNG.PNG differ
diff --git a/Profiling.pdf b/Profiling.pdf
new file mode 100644
index 0000000..5374b74
Binary files /dev/null and b/Profiling.pdf differ
diff --git a/Profiling.xlsx b/Profiling.xlsx
new file mode 100644
index 0000000..b587166
Binary files /dev/null and b/Profiling.xlsx differ
diff --git a/Profiling_Page_1.png b/Profiling_Page_1.png
new file mode 100644
index 0000000..766b97c
Binary files /dev/null and b/Profiling_Page_1.png differ
diff --git a/Profiling_Page_2.png b/Profiling_Page_2.png
new file mode 100644
index 0000000..54a4816
Binary files /dev/null and b/Profiling_Page_2.png differ
diff --git a/Profiling_Page_3.png b/Profiling_Page_3.png
new file mode 100644
index 0000000..92f315a
Binary files /dev/null and b/Profiling_Page_3.png differ
diff --git a/Profiling_Page_4.png b/Profiling_Page_4.png
new file mode 100644
index 0000000..8fe0b54
Binary files /dev/null and b/Profiling_Page_4.png differ
diff --git a/Profiling_Page_5.png b/Profiling_Page_5.png
new file mode 100644
index 0000000..53d8c7f
Binary files /dev/null and b/Profiling_Page_5.png differ
diff --git a/README.md b/README.md
index b71c458..875a34d 100644
--- a/README.md
+++ b/README.md
@@ -3,11 +3,68 @@ CUDA Stream Compaction
 **University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**
 
-* (TODO) YOUR NAME HERE
-* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
+* Ethan Brooks
+* Tested on: Windows 7, Intel(R) Xeon(R), GeForce GTX 1070 8GB (SIG Lab)
 
-### (TODO: Your README)
+For this project we compare several implementations of two fundamental GPU algorithms:
 
-Include analysis, etc. (Remember, this is public, so don't put
-anything here that you don't want to share with the world.)
+- `scan`: similar to `fold` in functional programming, except that it keeps every intermediate reduction rather than only the final one. `scan` takes an input array and a binary operator and returns an array in which each element is the reduction, under that operator, of all the elements that precede it in the input. For example, when the operator is addition, as in our implementation, each element of the output array is the sum of all preceding elements of the input array.
+- `stream-compact`: equivalent to `filter` in functional programming. `stream-compact` takes an input array and a test function and returns a shortened version of the input array containing only the elements that pass the test. Our implementations use an implicit test that passes only when a number is nonzero; in other words, our function filters all zeros out of the input array.
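+
+For example, with addition as the operator and "is nonzero" as the test:
+
+```
+input:               [3 1 7 0 4 0]
+scan (exclusive, +): [0 3 4 11 11 15]
+stream-compact:      [3 1 7 4]
+```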
+
+## Scan
+We implemented four versions of `scan`:
+
+- CPU scan: this implementation does not use the GPU at all. It simply iterates through the input array on the CPU, accumulating sums and writing them to the output array as it goes.
+
+- Naive scan: this implementation iteratively adds elements at doubling strides: on the first iteration, the algorithm adds all elements that are directly adjacent; on the second iteration, elements separated by a stride of two; on the third, elements separated by a stride of four. Strides continue to double until only one addition is performed (a worked example follows this list). The following picture depicts this process:
+![parallel scan](https://github.com/lobachevzky/Project2-Stream-Compaction/blob/master/parallel%20scan.png)
+Since the process terminates when the doubling stride exceeds the length of the input array, the total number of iterations is O(log n), where n is the number of elements in the input array. Since each thread performs a single addition per iteration, the time complexity of the algorithm as a whole is O(log n), _assuming that there are O(n) threads_.
+
+- Efficient scan: naive scan is perfectly effective when there are as many threads as elements in the input array. For larger arrays this is no longer the case; threads must round-robin through waiting kernels, so it becomes advantageous to minimize the total work launched. Naive scan actually performs _O(n log n)_ addition operations and consequently launches a total of _O(n log n)_ threads. Since the CPU version performs only _n_ additions, this suggests that we can do better. Efficient scan uses a clever upsweep/downsweep approach that keeps the logarithmic iteration count of the naive version while performing only one addition per element of the array.
+
+- Thrust scan: this is the implementation from Thrust, a C++ library of GPU algorithms.
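+
+As a small worked example of naive scan (an inclusive scan of [3 1 7 0], shifted right at the end to make it exclusive):
+
+```
+input:        [3  1  7  0]
+stride 1:     [3  4  8  7]
+stride 2:     [3  4 11 11]
+shift right:  [0  3  4 11]
+```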
+
+The following diagram compares performance between these implementations:
+![scan performance](https://github.com/lobachevzky/Project2-Stream-Compaction/blob/master/Profiling_Page_2.png)
+
+Note that the x-axis in this diagram is logarithmic (at each tick along the x-axis, the number of elements doubles).
+
+There are a few curiosities in these results. First, we observe that efficient scan, naive scan, and Thrust scan all appear to run with linear time complexity. This is because the GeForce GTX 1070 has only 1920 CUDA cores: on any GPU, the number of threads that can execute concurrently is fixed by the hardware and does not grow with the size of the input. On more powerful GPUs this constant is larger, so parallelizing a large fraction of the operations can still yield substantial speedups by dividing the total execution time by a large constant factor. But as the number of elements grows relative to the number of concurrently executing threads, this improvement becomes less evident. In fact, the constant-factor improvement may be offset by the GPU's shortcomings, namely its lack of optimizations for sequential operations (e.g. pipelining).
+
+This explains why the performance of the GPU implementations is comparable to (and in many cases worse than) the performance of the CPU implementation. It still does not explain why naive scan outperforms all the other GPU algorithms while efficient scan performs worst of all. One possible reason is that although efficient scan reduces the total number of addition operations by a factor of O(log n), it requires the work to be split into an upsweep and a downsweep, the latter of which must wait for the former to complete. Toward the end of each sweep very few cores are active -- at the very last iteration only about one addition is performed, so only one core does useful work. The naive implementation suffers the same tail-off, but only once, whereas the efficient implementation encounters it twice. Consequently the naive version actually achieves higher hardware saturation, even though it does more work overall.
+
+Another likely explanation is that the efficient implementation performs almost twice as many memory accesses per kernel invocation: three in the upsweep kernel and four in the downsweep, whereas the naive implementation performs only two per invocation. Since the GPU can perform only a limited number of simultaneous memory accesses, this may be a bottleneck for the efficient implementation.
+
+## Stream Compaction
+We also implemented three versions of `stream-compact`.
+
+- efficient stream-compact: this version takes three steps:
+
+  1. We call a kernel that populates a new array of booleans recording whether each element passes our test function (in our case, whether the element is nonzero).
+  2. We perform `scan` on the array of booleans, using the efficient version of `scan` described in the previous section.
+  3. The output of `scan` gives, for each nonzero element of the input array, the index it should occupy in the output. We use this information to assign elements of the input array to the output array in parallel.
+
+- compact with scan: in order to emulate the GPU version, we implement a version on the CPU that uses the CPU scan algorithm. As the diagram below demonstrates, this has no performance benefit on the CPU.
+
+- compact without scan: this final implementation is a straightforward CPU implementation that simply iterates through the input array and assigns all nonzero elements to the output array.
+
+The following diagram compares performance across these implementations. Again, the x-axis is logarithmic.
+
+![compaction performance](https://github.com/lobachevzky/Project2-Stream-Compaction/blob/master/Profiling_Page_3.png)
+
+In this case, we observe that the GPU implementation does outperform the CPU implementations. However, as discussed in the previous section, this cannot be credited to the `scan` operation; it must have more to do with steps 1 and 3 -- the parallel testing and assignment of each element of the input array to the output array.
+
+## Large numbers bug
+A strange bug that remains unresolved in this project is that our GPU algorithms simply would not run for arrays larger than 2^16. The following picture shows the result of running on an array of 2^17 elements:
+![large-array failure](https://github.com/lobachevzky/Project2-Stream-Compaction/blob/master/BugPNG.PNG)
+This appears to be a runtime failure rather than a logic error, since the program runs without error on every smaller array size we tested. One unfortunate consequence of this issue is that it was impossible to compare our GPU implementations with the CPU algorithms on very large arrays, where the GPU may well have had the advantage.
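+
+One hypothesis we have not ruled out: `main.cpp` declares the test arrays as locals (`int a[SIZE], b[SIZE], c[SIZE];`), so at 2^17 elements the three arrays total 3 x 512KB = 1.5MB, which exceeds the default 1MB stack on Windows. A hypothetical change that would eliminate that suspect is to move the arrays to the heap:
+
+```cpp
+// hypothetical change to main.cpp: allocate the test arrays on the heap
+int *a = new int[SIZE];
+int *b = new int[SIZE];
+int *c = new int[SIZE];
+// ... and delete[] them at the end of main
+```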
+We were able to compare the Thrust implementation of scan with the CPU version at sizes as large as 2^29. The following graph depicts the results:
+![Thrust vs. CPU scan](https://github.com/lobachevzky/Project2-Stream-Compaction/blob/master/Profiling_Page_1.png)
+It is interesting to note that the CPU _still_ outperforms even the optimized Thrust implementation. Also, the Thrust implementation triggers the same bug described earlier for arrays larger than 2^29. Again we reason that the GPU's poor showing can be credited to the limited number of concurrently executing threads on the GeForce GTX 1070 relative to these input sizes.
+
+## blockSize optimizations
+On both the naive and efficient implementations, we experimented with different block sizes and recorded the results in the charts below:
+
+![naive block sizes](https://github.com/lobachevzky/Project2-Stream-Compaction/blob/master/Profiling_Page_4.png)
+![efficient block sizes](https://github.com/lobachevzky/Project2-Stream-Compaction/blob/master/Profiling_Page_5.png)
+
+Based on these experiments, we ran all of the earlier benchmarks with a block size of 256.
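+
+For reference, every kernel in this project derives its grid size from the block size in the same way (this mirrors the `getNumBlocks` helper in `common.cu`); given `n` elements and device arrays `dev_bools` and `dev_idata`, the pattern is:
+
+```cpp
+#define blockSize 256
+
+// round up so that every array element gets its own thread
+int numBlocks = (n + blockSize - 1) / blockSize;
+StreamCompaction::Common::kernMapToBoolean<<<numBlocks, blockSize>>>(n, dev_bools, dev_idata);
+```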
+ */ + __global__ void kernMapToBoolean(int n, int *bools, const int *idata) { + int index = threadIndex(); + if (index >= n) return; + + bools[index] = idata[index] != 0; + } + + /** + * Performs scatter on an array. That is, for each element in idata, + * if bools[idx] == 1, it copies idata[idx] to odata[indices[idx]]. + */ + __global__ void kernScatter(int n, int *odata, + const int *idata, const int *bools, const int *indices) { + int index = threadIndex(); + if (index >= n) return; + + if (bools[index] == 1) { + odata[indices[index]] = idata[index]; + } + } + + } } diff --git a/stream_compaction/common.h b/stream_compaction/common.h index 4f52663..9c61a65 100644 --- a/stream_compaction/common.h +++ b/stream_compaction/common.h @@ -7,29 +7,35 @@ #define FILENAME (strrchr(__FILE__, '/') ? strrchr(__FILE__, '/') + 1 : __FILE__) #define checkCUDAError(msg) checkCUDAErrorFn(msg, FILENAME, __LINE__) +void printArray(int n, const int *a, bool abridged = false); + +int getNumBlocks(int blockSize, int n); + /** - * Check for CUDA errors; print and exit if there was a problem. - */ +* Check for CUDA errors; print and exit if there was a problem. +*/ void checkCUDAErrorFn(const char *msg, const char *file = NULL, int line = -1); inline int ilog2(int x) { - int lg = 0; - while (x >>= 1) { - ++lg; - } - return lg; + int lg = 0; + while (x >>= 1) { + ++lg; + } + return lg; } inline int ilog2ceil(int x) { - return ilog2(x - 1) + 1; + return ilog2(x - 1) + 1; } +__device__ int threadIindex(); namespace StreamCompaction { -namespace Common { - __global__ void kernMapToBoolean(int n, int *bools, const int *idata); + namespace Common { + __global__ void kernMapToBoolean(int n, int *bools, const int *idata); - __global__ void kernScatter(int n, int *odata, - const int *idata, const int *bools, const int *indices); -} + __global__ void kernScatter(int n, int *odata, + const int *idata, const int *bools, const int *indices); + + } } diff --git a/stream_compaction/cpu.cu b/stream_compaction/cpu.cu index e600c29..1acdfd3 100644 --- a/stream_compaction/cpu.cu +++ b/stream_compaction/cpu.cu @@ -1,36 +1,65 @@ #include #include "cpu.h" +#include "common.h" +#include +#include +#include namespace StreamCompaction { -namespace CPU { - -/** - * CPU scan (prefix sum). - */ -void scan(int n, int *odata, const int *idata) { - // TODO - printf("TODO\n"); -} + namespace CPU { -/** - * CPU stream compaction without using the scan function. - * - * @returns the number of elements remaining after compaction. - */ -int compactWithoutScan(int n, int *odata, const int *idata) { - // TODO - return -1; -} + /** + * CPU scan (prefix sum). + */ + void scan(int n, int *odata, const int *idata) { + // TODO + int valToWrite = 0; + int sumOfPrev2; + for (int i = 0; i < n; i++) { + sumOfPrev2 = valToWrite + idata[i]; + odata[i] = valToWrite; + valToWrite = sumOfPrev2; + } + } -/** - * CPU stream compaction using scan and scatter, like the parallel version. - * - * @returns the number of elements remaining after compaction. - */ -int compactWithScan(int n, int *odata, const int *idata) { - // TODO - return -1; -} + /** + * CPU stream compaction without using the scan function. + * + * @returns the number of elements remaining after compaction. + */ + int compactWithoutScan(int n, int *odata, const int *idata) { + // TODO + int j = 0; + for (int i = 0; i < n; i++) { + if (idata[i] != 0) { + odata[j++] = idata[i]; + } + } + return j; + } -} + /** + * CPU stream compaction using scan and scatter, like the parallel version. 
+ * + * @returns the number of elements remaining after compaction. + */ + int compactWithScan(int n, int *odata, const int *idata) { + // TODO + + for (int i = 0; i < n; i++) { + odata[i] = idata[i] != 0; + } + + scan(n, odata, odata); + + + int retVal = odata[n - 1] + (idata[n - 1] != 0); + for (int i = 0; i < n; i++) { + if (idata[i] != 0) { + odata[odata[i]] = idata[i]; + } + } + return retVal; + } + } } diff --git a/stream_compaction/efficient.cu b/stream_compaction/efficient.cu index b2f739b..97fc09f 100644 --- a/stream_compaction/efficient.cu +++ b/stream_compaction/efficient.cu @@ -2,33 +2,192 @@ #include #include "common.h" #include "efficient.h" +#include "stdio.h" +#include "stdlib.h" + +#define blockSize 256 +#define checkCUDAErrorWithLine(msg) checkCUDAError(msg, __LINE__) namespace StreamCompaction { -namespace Efficient { + namespace Efficient { -// TODO: __global__ + __device__ int threadIndex() { + return (blockIdx.x * blockDim.x) + threadIdx.x; + } -/** - * Performs prefix-sum (aka scan) on idata, storing the result into odata. - */ -void scan(int n, int *odata, const int *idata) { - // TODO - printf("TODO\n"); -} + __global__ void kernUpSweep(int n, int d, int *odata, int *idata) { + int index = threadIndex(); + if (index >= n) return; + int addTerm = (index + 1) % (d * 2) == 0 ? idata[index - d] : 0; + odata[index] = idata[index] + addTerm; + } -/** - * Performs stream compaction on idata, storing the result into odata. - * All zeroes are discarded. - * - * @param n The number of elements in idata. - * @param odata The array into which to store elements. - * @param idata The array of elements to compact. - * @returns The number of elements remaining after compaction. - */ -int compact(int n, int *odata, const int *idata) { - // TODO - return -1; -} + __global__ void kernDownSweep(int length, int d, int *odata, int *idata) { + int index = threadIndex(); + if (index >= length) return; -} + // On the first iteration, and using only one thread, set the last element to 0. + if ((index + 1) % d == 0) { + int swapIndex = index - (d / 2); + int term = (length == d) && (index == d - 1) ? 0 : idata[index]; + odata[index] = term + idata[swapIndex]; + odata[swapIndex] = term; + } + } + + int bufferToPow2(int n) { + return pow(2, ceil(log2(n))); // n rounded up to the nearest power of 2 + } + + void dev_scan(int n, int *dev_odata, int *dev_idata) { + + int bufferedLength = bufferToPow2(n); + int numBlocks = getNumBlocks(blockSize, n); // enough blocks to allocate one thread to each array element + + // upsweep + for (int d = 1; d <= n; d *= 2) { + kernUpSweep << > >(n, d, dev_odata, dev_idata); + + // swap dev_idata with dev_odata + int *swap = dev_idata; + dev_idata = dev_odata; + dev_odata = swap; + } + + // downsweep + for (int d = bufferedLength; d >= 1; d /= 2) { + kernDownSweep << > >(bufferedLength, d, dev_odata, dev_idata); + + // swap dev_idata with dev_odata + int *swap = dev_idata; + dev_idata = dev_odata; + dev_odata = swap; + } + } + + + /** + * Performs prefix-sum (aka scan) on idata, storing the result into odata. 
+ */ + void scan(int n, int *odata, const int *idata) { + + // declare arrays + int* dev_idata; + int* dev_odata; + + int bufferedLength = bufferToPow2(n); + + // allocate memory + cudaMalloc((void**)&dev_idata, bufferedLength * sizeof(int)); + checkCUDAErrorWithLine("cudaMalloc dev_idata failed!"); + cudaMalloc((void**)&dev_odata, bufferedLength * sizeof(int)); + checkCUDAErrorWithLine("cudaMalloc dev_odata failed!"); + + // copy memory and run the algorithm + cudaMemcpy(dev_idata, idata, n * sizeof(int), cudaMemcpyHostToDevice); + + dev_scan(n, dev_odata, dev_idata); + + cudaMemcpy(odata, dev_idata, n* sizeof(int), cudaMemcpyDeviceToHost); + + cudaFree(dev_idata); + cudaFree(dev_odata); + } + + /** + * Performs stream compaction on idata, storing the result into odata. + * All zeroes are discarded. + * + * @param n The number of elements in idata. + * @param odata The array into which to store elements. + * @param idata The array of elements to compact. + * @returns The number of elements remaining after compaction. + */ + int compact(int n, int *odata, const int *idata) { + // declare arrays + int* dev_idata; + int* dev_odata; + int* dev_bools; + int* dev_pingPong; + int* dev_indices; + int* bools = (int*)calloc(n, sizeof(int)); + int* indices = (int*)calloc(n, sizeof(int)); + int* pingPong = (int*)calloc(n, sizeof(int)); + + //cudaEvent_t start, stop; + //cudaEventCreate(&start); + //cudaEventCreate(&stop); + + //cudaEventRecord(start); + //saxpy << <(N + 255) / 256, 256 >> >(N, 2.0f, d_x, d_y); + //cudaEventRecord(stop); + + //cudaEventSynchronize(stop);6 + //float milliseconds = 0; + //cudaEventElapsedTime(&milliseconds, start, stop); + + // allocate memory + cudaMalloc((void**)&dev_idata, n * sizeof(int)); + checkCUDAErrorWithLine("cudaMalloc dev_idata failed!"); + cudaMalloc((void**)&dev_bools, n * sizeof(int)); + checkCUDAErrorWithLine("cudaMalloc dev_bools failed!"); + cudaMalloc((void**)&dev_pingPong, n * sizeof(int)); + checkCUDAErrorWithLine("cudaMalloc dev_pingPong failed!"); + cudaMalloc((void**)&dev_indices, n * sizeof(int)); + checkCUDAErrorWithLine("cudaMalloc dev_indices failed!"); + cudaMalloc((void**)&dev_odata, n * sizeof(int)); + checkCUDAErrorWithLine("cudaMalloc dev_odata failed!"); + + // copy input data to device + cudaMemcpy(dev_idata, idata, n * sizeof(int), cudaMemcpyHostToDevice); + + //////////// + cudaEvent_t start, stop; + cudaEventCreate(&start); + cudaEventCreate(&stop); + cudaEventRecord(start); + //////////////// + + // enough blocks to allocate one thread to each array element + int numBlocks = (n / blockSize) + 1; + + // get array of booleans determining whether + Common::kernMapToBoolean << > > (n, dev_bools, dev_idata); + cudaMemcpy(dev_pingPong, dev_bools, n * sizeof(int), cudaMemcpyDeviceToDevice); + + // allocate memory and run scan + dev_scan(n, dev_indices, dev_pingPong); + + Common::kernScatter << > > (n, dev_odata, dev_idata, dev_bools, dev_indices); + + /////////// + cudaEventRecord(stop); + cudaEventSynchronize(stop); + float milliseconds = 0; + cudaEventElapsedTime(&milliseconds, start, stop); + printf("%f\n", milliseconds); + cudaEventDestroy(start); + cudaEventDestroy(stop); + ///////// + + + // copy from device + cudaMemcpy(indices, dev_indices, n * sizeof(int), cudaMemcpyDeviceToHost); + cudaMemcpy(bools, dev_bools, n * sizeof(int), cudaMemcpyDeviceToHost); + int newLength = indices[n - 1] + bools[n - 1]; // return value + cudaMemcpy(odata, dev_odata, newLength * sizeof(int), cudaMemcpyDeviceToHost); + + // free memory + 
+            cudaFree(dev_idata);
+            cudaFree(dev_odata);
+            cudaFree(dev_bools);
+            cudaFree(dev_indices);
+            cudaFree(dev_pingPong);
+            free(indices);
+            free(bools);
+
+            return newLength;
+        }
+
+    }
}
diff --git a/stream_compaction/naive.cu b/stream_compaction/naive.cu
index 3d86b60..65577a4 100644
--- a/stream_compaction/naive.cu
+++ b/stream_compaction/naive.cu
@@ -3,18 +3,79 @@
 #include "common.h"
 #include "naive.h"
 
+#define blockSize 256
+#define checkCUDAErrorWithLine(msg) checkCUDAError(msg, __LINE__)
+
 namespace StreamCompaction {
-namespace Naive {
+    namespace Naive {
 
-// TODO: __global__
+        // Local copy of the grid-wide thread index.
+        __device__ int threadIndex() {
+            return (blockIdx.x * blockDim.x) + threadIdx.x;
+        }
 
-/**
- * Performs prefix-sum (aka scan) on idata, storing the result into odata.
- */
-void scan(int n, int *odata, const int *idata) {
-    // TODO
-    printf("TODO\n");
-}
-}
+        // One pass of the naive inclusive scan: each element adds in the
+        // element d positions to its left, if there is one.
+        __global__ void kernAdd(int d, int n, int *odata, int *idata) {
+            int index = threadIndex();
+            if (index >= n) return;
+            odata[index] = (index < d ? 0 : idata[index - d]) + idata[index];
+        }
+
+        // Convert the inclusive scan to an exclusive one: shift everything
+        // right by one and put the additive identity in front.
+        __global__ void kernShiftRight(int n, int *odata, int *idata) {
+            int index = threadIndex();
+            if (index >= n) return;
+            odata[index] = index == 0 ? 0 : idata[index - 1];
+        }
+
+        /**
+         * Performs prefix-sum (aka scan) on idata, storing the result into odata.
+         */
+        void scan(int n, int *odata, const int *idata) {
+            int* dev_idata;
+            int* dev_odata;
+
+            cudaMalloc((void**)&dev_idata, n * sizeof(int));
+            checkCUDAErrorWithLine("cudaMalloc dev_idata failed!");
+            cudaMalloc((void**)&dev_odata, n * sizeof(int));
+            checkCUDAErrorWithLine("cudaMalloc dev_odata failed!");
+
+            cudaMemcpy(dev_idata, idata, n * sizeof(int), cudaMemcpyHostToDevice);
+
+            // time the scan with CUDA events
+            cudaEvent_t start, stop;
+            cudaEventCreate(&start);
+            cudaEventCreate(&stop);
+            cudaEventRecord(start);
+
+            int numBlocks = getNumBlocks(blockSize, n);
+            // double the stride until it reaches the array length
+            for (int d = 1; d < n; d *= 2) {
+                kernAdd<<<numBlocks, blockSize>>>(d, n, dev_odata, dev_idata);
+
+                // swap dev_idata with dev_odata so this pass's output
+                // feeds the next pass
+                int *swap = dev_idata;
+                dev_idata = dev_odata;
+                dev_odata = swap;
+            }
+            kernShiftRight<<<numBlocks, blockSize>>>(n, dev_odata, dev_idata);
+
+            cudaEventRecord(stop);
+            cudaEventSynchronize(stop);
+            float milliseconds = 0;
+            cudaEventElapsedTime(&milliseconds, start, stop);
+            printf("%f\n", milliseconds);
+            cudaEventDestroy(start);
+            cudaEventDestroy(stop);
+
+            cudaMemcpy(odata, dev_odata, n * sizeof(int), cudaMemcpyDeviceToHost);
+
+            cudaFree(dev_idata);
+            cudaFree(dev_odata);
+        }
+
+    }
}
diff --git a/stream_compaction/thrust.cu b/stream_compaction/thrust.cu
index d8dbb32..91c36b5 100644
--- a/stream_compaction/thrust.cu
+++ b/stream_compaction/thrust.cu
@@ -6,17 +6,20 @@
 #include "common.h"
 #include "thrust.h"
 
+#define checkCUDAErrorWithLine(msg) checkCUDAError(msg, __LINE__)
+
 namespace StreamCompaction {
-namespace Thrust {
+    namespace Thrust {
 
-/**
- * Performs prefix-sum (aka scan) on idata, storing the result into odata.
- */
-void scan(int n, int *odata, const int *idata) {
-    // TODO use `thrust::exclusive_scan`
-    // example: for device_vectors dv_in and dv_out:
-    // thrust::exclusive_scan(dv_in.begin(), dv_in.end(), dv_out.begin());
-}
+        /**
+         * Performs prefix-sum (aka scan) on idata, storing the result into odata.
+         */
+        void scan(int n, int *odata, const int *idata) {
+            // copy the input into a device_vector first: calling
+            // thrust::exclusive_scan on raw host pointers would dispatch
+            // to Thrust's host backend instead of running on the GPU
+            thrust::device_vector<int> dv_in(idata, idata + n);
+            thrust::device_vector<int> dv_out(n);
+
+            thrust::exclusive_scan(dv_in.begin(), dv_in.end(), dv_out.begin());
+
+            // copy the result back to the host
+            cudaMemcpy(odata, thrust::raw_pointer_cast(dv_out.data()),
+                    n * sizeof(int), cudaMemcpyDeviceToHost);
+        }
 
-}
+    }
}