Monday 15 June 2015

bigdata - Is there any way to process a large amount of float data while keeping double precision on a CUDA device? -


There is a large set of data waiting to be processed on the CUDA device with a machine learning algorithm. I have some concerns about the device's memory, so I am trying to use float instead of double (I think this is a good solution unless there is a better one). Is there any way to get double-precision accuracy in results computed from float data? I do not think so. Even if this is a somewhat silly question, what is the right way to handle large data sets on the device?

No, there is no way to retain double precision in the results if you process the data as float. If memory size is the problem, keep the data as double and use the general approach of processing it in chunks: copy part of the data to the GPU, start processing it on the GPU, and, while processing is running, copy more data to the GPU and copy some results back. This is the standard way to handle data sets that do not fit in GPU memory.

This is called overlapping copy and compute, and you use CUDA streams to accomplish it. There are several example codes in the CUDA samples that use streams this way.
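Below is a minimal sketch of that idea, not code from the CUDA samples: the data stays in double precision and is processed in chunks, with each chunk's host-to-device copy, kernel, and device-to-host copy issued asynchronously in its own stream so transfers overlap with computation. The kernel `scale_double`, the sizes, the number of streams, and the `CHECK` macro are illustrative assumptions standing in for the real machine learning workload.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal error-checking helper (assumption, for brevity).
#define CHECK(call)                                                         \
    do {                                                                    \
        cudaError_t err = (call);                                           \
        if (err != cudaSuccess) {                                           \
            fprintf(stderr, "CUDA error %s at %s:%d\n",                     \
                    cudaGetErrorString(err), __FILE__, __LINE__);           \
            return 1;                                                       \
        }                                                                   \
    } while (0)

// Placeholder for the real machine-learning kernel; works in double precision.
__global__ void scale_double(double *data, size_t n, double factor)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const size_t total   = 1 << 24;   // total number of doubles (assumption)
    const size_t chunk   = 1 << 20;   // elements per chunk (assumption)
    const int    nStream = 4;         // streams used for overlap

    // Pinned host memory is required for truly asynchronous copies.
    double *h_data;
    CHECK(cudaMallocHost(&h_data, total * sizeof(double)));
    for (size_t i = 0; i < total; ++i)
        h_data[i] = (double)i;

    // One device buffer and one stream per slot; chunks cycle through them.
    double      *d_buf[nStream];
    cudaStream_t stream[nStream];
    for (int s = 0; s < nStream; ++s) {
        CHECK(cudaMalloc(&d_buf[s], chunk * sizeof(double)));
        CHECK(cudaStreamCreate(&stream[s]));
    }

    const int threads = 256;
    for (size_t off = 0, s = 0; off < total; off += chunk, s = (s + 1) % nStream) {
        size_t n = (off + chunk <= total) ? chunk : total - off;
        int blocks = (int)((n + threads - 1) / threads);

        // Copy in, compute, copy out, all asynchronously in this chunk's
        // stream, so the copies for one chunk overlap with kernels of others.
        CHECK(cudaMemcpyAsync(d_buf[s], h_data + off, n * sizeof(double),
                              cudaMemcpyHostToDevice, stream[s]));
        scale_double<<<blocks, threads, 0, stream[s]>>>(d_buf[s], n, 2.0);
        CHECK(cudaMemcpyAsync(h_data + off, d_buf[s], n * sizeof(double),
                              cudaMemcpyDeviceToHost, stream[s]));
    }
    CHECK(cudaDeviceSynchronize());

    printf("h_data[1] = %f (expected 2.0)\n", h_data[1]);

    for (int s = 0; s < nStream; ++s) {
        CHECK(cudaFree(d_buf[s]));
        CHECK(cudaStreamDestroy(stream[s]));
    }
    CHECK(cudaFreeHost(h_data));
    return 0;
}
```

Because operations issued to the same stream execute in order, a device buffer can be safely reused by the next chunk assigned to that stream; increasing the number of streams simply allows more chunks to be in flight at once.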
