Calling a handwritten CUDA kernel with Thrust

Posted by macs on Stack Overflow, 2010-03-07

Hi,

Since I needed to sort large arrays of numbers with CUDA, I ended up using Thrust. So far, so good... but what about when I want to call a "handwritten" kernel, with the data sitting in a thrust::host_vector?

My approach was (copying the results back is omitted):

#include <thrust/host_vector.h>
#include <thrust/device_ptr.h>
#include <thrust/device_malloc.h>
#include <thrust/device_free.h>
#include <thrust/copy.h>

int CUDA_CountAndAdd_Kernel(thrust::host_vector<float> *samples, thrust::host_vector<int> *counts, int n) {

    // Allocate device memory and copy the host data over
    thrust::device_ptr<float> dSamples = thrust::device_malloc<float>(n);
    thrust::copy(samples->begin(), samples->end(), dSamples);

    thrust::device_ptr<int> dCounts = thrust::device_malloc<int>(n);
    thrust::copy(counts->begin(), counts->end(), dCounts);

    // Get raw device pointers to hand to the handwritten kernel
    float *dSamples_raw = thrust::raw_pointer_cast(dSamples);
    int *dCounts_raw = thrust::raw_pointer_cast(dCounts);

    // Launch with a single block of n threads
    CUDA_CountAndAdd_Kernel<<<1, n>>>(dSamples_raw, dCounts_raw);

    thrust::device_free(dCounts);
    thrust::device_free(dSamples);
}
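For context, a hypothetical call site for this wrapper might look like the following (the problem size and fill values are assumptions, not from the original post):

#include <thrust/host_vector.h>

int main() {
    const int n = 256;                            // assumed problem size
    thrust::host_vector<float> samples(n, 1.0f);  // dummy input data
    thrust::host_vector<int> counts(n, 0);

    // Call the wrapper shown above
    CUDA_CountAndAdd_Kernel(&samples, &counts, n);
    return 0;
}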

The kernel looks like:

__global__ void CUDA_CountAndAdd_Kernel_Device(float *samples, int *counts) 
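The body is not shown in the post; purely as a placeholder, a one-element-per-thread body could look something like this (the counting/adding logic here is an assumption):

__global__ void CUDA_CountAndAdd_Kernel_Device(float *samples, int *counts) {
    // Placeholder logic only: each thread handles one element.
    // No bounds check, since the launch above uses exactly n threads.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (samples[i] > 0.0f) {   // hypothetical predicate
        counts[i] += 1;        // hypothetical "count and add"
    }
}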

But compilation fails with:

error: argument of type "float *" is incompatible with parameter of type "thrust::host_vector<float, std::allocator<float>> *"

Huh?! I thought I was passing float and int raw pointers? Or am I missing something?
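For what it's worth, here is a minimal sketch of the more usual Thrust idiom for this, assuming the launch is meant to hit the __global__ kernel above. Note that the snippet launches CUDA_CountAndAdd_Kernel, i.e. the host wrapper's own name, rather than CUDA_CountAndAdd_Kernel_Device, which would explain why the compiler expects the wrapper's thrust::host_vector parameters. The wrapper name CountAndAdd below is made up:

#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/copy.h>

__global__ void CUDA_CountAndAdd_Kernel_Device(float *samples, int *counts);

// Hypothetical wrapper: copies in, launches the kernel, copies the counts back.
void CountAndAdd(thrust::host_vector<float> &samples, thrust::host_vector<int> &counts, int n) {
    // device_vector takes care of allocation, host-to-device copy and cleanup
    thrust::device_vector<float> dSamples = samples;
    thrust::device_vector<int> dCounts = counts;

    // Launch the __global__ kernel under its own name, passing raw device pointers
    CUDA_CountAndAdd_Kernel_Device<<<1, n>>>(
        thrust::raw_pointer_cast(&dSamples[0]),
        thrust::raw_pointer_cast(&dCounts[0]));

    // Copy the results back to the host (the "backcopy" omitted above)
    thrust::copy(dCounts.begin(), dCounts.end(), counts.begin());
}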
