CUDA threads for inner loop

Posted by Manolete on Stack Overflow
I've got this kernel

__global__ void kernel1(int keep, int include, int width, int* d_Xco,
                        int* d_Xnum, bool* d_Xvalid, float* d_Xblas)
{
  int i, k;
  i = threadIdx.x + blockIdx.x * blockDim.x;      // one thread per row i

  if(i < keep){

    for(k = 0; k < include; k++){                 // walk the row sequentially

      int val = (d_Xblas[i*include + k] >= 1e5);  // 1 if this entry hits the threshold
      int aux = d_Xnum[i];                        // current write slot in d_Xco for row i

      d_Xblas[i*include + k] *= (!val);           // zero out entries >= 1e5
      d_Xco[i*width + aux] = k;                   // record the column index k
      d_Xnum[i] += val;                           // advance the slot only when val == 1
      d_Xvalid[i*include + k] = (!val);           // mark the entry as still valid
    }
  }
}

launched with

int keep = 9000;
int include = 23000;
int width = 0.2*include;

int threads = 192;
int blocks = (keep + threads - 1) / threads;
kernel1 <<< blocks,threads  >>>( keep, include, width,
                                 d_Xco, d_Xnum, d_Xvalid, d_Xblas );

This kernel1 works fine, but it is obviously not fully optimized. I thought it would be straightforward to eliminate the inner loop over k, but for some reason my attempt doesn't work. My first idea was:

__global__ void kernel2(int keep, int include, int width, 
               int* d_Xco, int* d_Xnum, bool* d_Xvalid, 
               float* d_Xblas)
{

  int i, k;  
  i = threadIdx.x + blockIdx.x * blockDim.x;
  k = threadIdx.y + blockIdx.y * blockDim.y;

  if((i < keep)  && (k < include) ) {

      int val = (d_Xblas[i*include + k] >= 1e5);
      int aux = d_Xnum[i];
      d_Xblas[i*include + k] *= (float)(!val);
      d_Xco[i*width + aux] = k;
      atomicAdd(&d_Xnum[i], val);
      d_Xvalid[i*include + k] = (!val);
  }
}

launched with a 2D grid:

int keep = 9000;
int include = 23000;
int width = 0.2*include;

int th = 32;
dim3 threads(th,th);
dim3 blocks((keep + threads.x - 1) / threads.x, (include + threads.y - 1) / threads.y);
kernel2 <<< blocks,threads >>>( keep, include, width, d_Xco, d_Xnum, 
                               d_Xvalid, d_Xblas );

Although I believe the idea is fine, it does not work and I am running out of ideas. Could you please help me out? I also think the problem could be in d_Xco, which stores the positions k compacted into a smaller array, so the order of the writes matters, but I can't think of any other way of doing it...
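
To illustrate why I think the order matters, this is roughly what I would have to do to avoid the race on d_Xnum (a sketch only; the name kernel2_atomic_slot and the aux < width guard are just illustrative, not code I am running). Each thread reserves its own slot with the old value returned by atomicAdd, so the read of d_Xnum[i] and the write to d_Xco no longer race, but the k values then land in d_Xco in whatever order the threads happen to run, instead of in increasing k as kernel1 leaves them:

// Sketch only: reserve a unique slot per thread via atomicAdd's return value.
// This avoids the unsynchronised read of d_Xnum[i], but the recorded k values
// are no longer ordered, which is the behaviour I need to preserve.
__global__ void kernel2_atomic_slot(int keep, int include, int width,
                                    int* d_Xco, int* d_Xnum,
                                    bool* d_Xvalid, float* d_Xblas)
{
  int i = threadIdx.x + blockIdx.x * blockDim.x;   // row
  int k = threadIdx.y + blockIdx.y * blockDim.y;   // column

  if((i < keep) && (k < include)){

    int val = (d_Xblas[i*include + k] >= 1e5);

    d_Xblas[i*include + k] *= (float)(!val);
    d_Xvalid[i*include + k] = (!val);

    if(val){
      int aux = atomicAdd(&d_Xnum[i], 1);   // old value = this thread's private slot
      if(aux < width)                       // illustrative guard against overflowing the row
        d_Xco[i*width + aux] = k;
    }
  }
}

So the race itself seems avoidable, but the ordering in d_Xco is not preserved, and that is the part I am stuck on.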
