
CUDA program causes nvidia driver to crash

My Monte Carlo pi calculation CUDA program is causing my NVIDIA driver to crash when I exceed around 500 trials and 256 full blocks. It seems to be happening in the monteCarlo kernel function. Any help is appreciated.

#include <stdio.h>
#include <stdlib.h>
#include <cuda.h>
#include <curand.h>
#include <curand_kernel.h>


#define NUM_THREAD 256
#define NUM_BLOCK 256



///////////////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////////

// Function to sum an array
__global__ void reduce0(float *g_odata) {
extern __shared__ int sdata[];

// each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
sdata[tid] = g_odata[i];
__syncthreads();

// do reduction in shared mem
for (unsigned int s=1; s < blockDim.x; s *= 2) { // step = s x 2
    if (tid % (2*s) == 0) { // only threadIDs divisible by the step participate
        sdata[tid] += sdata[tid + s];
    }
    __syncthreads();
}

// write result for this block to global mem
if (tid == 0) g_odata[blockIdx.x] = sdata[0];
}

///////////////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////////
__global__ void monteCarlo(float *g_odata, int  trials, curandState *states){
//  unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    unsigned int incircle, k;
    float x, y, z;
    incircle = 0;

    curand_init(1234, i, 0, &states[i]);

    for(k = 0; k < trials; k++){
        x = curand_uniform(&states[i]);
        y = curand_uniform(&states[i]);
        z =(x*x + y*y);
        if (z <= 1.0f) incircle++;
    }
    __syncthreads();
    g_odata[i] = incircle;
}
///////////////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////////
int main() {

    float* solution = (float*)calloc(100, sizeof(float));
    float *sumDev, *sumHost, total;
    const char *error;
    int trials; 
    curandState *devStates;

    trials = 500;
    total = trials*NUM_THREAD*NUM_BLOCK;

    dim3 dimGrid(NUM_BLOCK,1,1); // Grid dimensions
    dim3 dimBlock(NUM_THREAD,1,1); // Block dimensions
    size_t size = NUM_BLOCK*NUM_THREAD*sizeof(float); //Array memory size
    sumHost = (float*)calloc(NUM_BLOCK*NUM_THREAD, sizeof(float));

    cudaMalloc((void **) &sumDev, size); // Allocate array on device
    error = cudaGetErrorString(cudaGetLastError());
    printf("%s
", error);


    cudaMalloc((void **) &devStates, (NUM_THREAD*NUM_BLOCK)*sizeof(curandState));
    error = cudaGetErrorString(cudaGetLastError());
    printf("%s
", error);


    // Do calculation on device by calling CUDA kernel
    monteCarlo <<<dimGrid, dimBlock>>> (sumDev, trials, devStates);
    error = cudaGetErrorString(cudaGetLastError());
    printf("%s
", error);

        // call reduction function to sum
    reduce0 <<<dimGrid, dimBlock, (NUM_THREAD*sizeof(float))>>> (sumDev);
    error = cudaGetErrorString(cudaGetLastError());
    printf("%s
", error);

    dim3 dimGrid1(1,1,1);
    dim3 dimBlock1(256,1,1);
    reduce0 <<<dimGrid1, dimBlock1, (NUM_THREAD*sizeof(float))>>> (sumDev);
    error = cudaGetErrorString(cudaGetLastError());
    printf("%s
", error);

    // Retrieve result from device and store it in host array
    cudaMemcpy(sumHost, sumDev, sizeof(float), cudaMemcpyDeviceToHost);
    error = cudaGetErrorString(cudaGetLastError());
    printf("%s
", error);


    *solution = 4*(sumHost[0]/total);
    printf("%.*f
", 1000, *solution);
    free (solution);
    free(sumHost);
    cudaFree(sumDev);
    cudaFree(devStates);
    //*solution = NULL;
    return 0;
}
See Question&Answers more detail:os

与恶龙缠斗过久,自身亦成为恶龙;凝视深渊过久,深渊将回以凝视…
Welcome To Ask or Share your Answers For Others

1 Reply


If smaller numbers of trials work correctly, and if you are running on MS Windows without the NVIDIA Tesla Compute Cluster (TCC) driver and/or the GPU you are using is attached to a display, then you are probably exceeding the operating system's "watchdog" timeout. If the kernel occupies the display device (or any GPU on Windows without TCC) for too long, the OS will kill the kernel so that the system does not become non-interactive.
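If you want to confirm this, you can query whether the run-time limit applies to the device you are using. A minimal sketch, assuming device 0:

#include <stdio.h>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // properties of device 0
    // kernelExecTimeoutEnabled is nonzero when the OS watchdog can kill long-running kernels
    printf("%s: run time limit on kernels %s\n", prop.name,
           prop.kernelExecTimeoutEnabled ? "enabled" : "disabled");
    return 0;
}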

The solution is to run on a GPU that is not attached to a display and, if you are on Windows, to use the TCC driver. Otherwise, you will need to reduce the number of trials per kernel launch and run the kernel multiple times to accumulate the total number of trials you need.
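For example, here is a rough sketch of that host-side loop, reusing the grids, arrays and kernels from your main() and assuming curand_init has been moved out of monteCarlo (see the edit below) so that repeated launches continue the same random sequences:

    int trialsPerLaunch = 100;             // small enough to stay under the watchdog
    int numLaunches = 5;                   // 5 * 100 = 500 trials per thread in total
    float inCircleTotal = 0.0f, partial;

    for (int launch = 0; launch < numLaunches; launch++) {
        monteCarlo <<<dimGrid, dimBlock>>> (sumDev, trialsPerLaunch, devStates);
        reduce0 <<<dimGrid, dimBlock, (NUM_THREAD*sizeof(float))>>> (sumDev);
        reduce0 <<<dimGrid1, dimBlock1, (NUM_THREAD*sizeof(float))>>> (sumDev);

        cudaMemcpy(&partial, sumDev, sizeof(float), cudaMemcpyDeviceToHost);
        inCircleTotal += partial;          // accumulate the in-circle count on the host
    }
    // pi is then approximately 4 * inCircleTotal / (numLaunches * trialsPerLaunch * NUM_THREAD * NUM_BLOCK)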

EDIT: According to the CUDA 4.0 curand docs (page 15, "Performance Notes"), you can improve performance by copying the state for a generator to local storage inside your kernel, then storing the state back (if you need it again) when you are finished:

curandState state = states[i];   // local (register) copy of this thread's generator state

for(k = 0; k < trials; k++){
    x = curand_uniform(&state);
    y = curand_uniform(&state);
    z =(x*x + y*y);
    if (z <= 1.0f) incircle++;
}

states[i] = state;               // store the state back if you need it in a later launch

Next, it mentions that setup is expensive, and suggests that you move curand_init into a separate kernel. This may help keep the cost of your MC kernel down so you don't run up against the watchdog.
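A minimal sketch of what that might look like (setupStates is a hypothetical name; the grid/block configuration and seed match the ones you already use):

// Hypothetical setup kernel: pay the curand_init cost once, in its own launch
__global__ void setupStates(curandState *states, unsigned long seed) {
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    curand_init(seed, i, 0, &states[i]);   // one state per thread, distinct subsequence
}

// host side, once, before the first monteCarlo launch:
//     setupStates <<<dimGrid, dimBlock>>> (devStates, 1234);
// and remove the curand_init(...) call from monteCarlo.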

I recommend reading that section of the docs; it contains several useful guidelines.

