Friday, May 15, 2009

NVIDIA Releases CUDA Toolkit 2.2

                             
NVIDIA has released version 2.2 of the CUDA Toolkit and SDK for GPU computing. The release includes support for Windows 7, the upcoming OS from Microsoft that embraces GPU computing. CUDA Toolkit 2.2 features a Visual Profiler, improved OpenGL interoperability, zero-copy memory access and a hardware debugger for the GPU.
                                   
The CUDA Visual Profiler is a graphical tool for profiling C applications running on the GPU. This release adds metrics for memory transactions, giving developers visibility into one of the most important areas they can tune for better performance. CUDA Toolkit 2.2 also improves performance for medical imaging and other OpenGL applications running on Quadro GPUs when CUDA computation and OpenGL rendering are performed on different GPUs, and delivers up to 2x bandwidth savings for video processing applications.

Zero-copy support enables streaming media, video transcoding, image processing and signal processing applications to realise performance improvements by allowing CUDA functions to read from and write to pinned system memory directly. This reduces the frequency and volume of data copied back and forth between GPU and CPU memory. Zero-copy is supported on MCP7x and GT200 and later GPUs.
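The zero-copy path can be sketched as follows. This is a minimal example, not NVIDIA's own sample code: the kernel and buffer names are hypothetical, but the API calls (cudaSetDeviceFlags, cudaHostAlloc with cudaHostAllocMapped, cudaHostGetDevicePointer) are the ones introduced for this feature.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Hypothetical kernel: scales data in place, reading and writing
// pinned host memory directly through a mapped device pointer.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;
    float *h_data;   // pinned, mapped host buffer
    float *d_data;   // device-side alias of the same buffer

    // Opt in to mapped pinned allocations before allocating.
    cudaSetDeviceFlags(cudaDeviceMapHost);

    // Allocate pinned host memory that the GPU can address directly.
    cudaHostAlloc((void **)&h_data, n * sizeof(float), cudaHostAllocMapped);
    for (int i = 0; i < n; ++i)
        h_data[i] = (float)i;

    // Obtain the device pointer to the same physical memory;
    // no cudaMemcpy is needed in either direction.
    cudaHostGetDevicePointer((void **)&d_data, h_data, 0);

    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
    cudaThreadSynchronize();   // CUDA 2.2-era synchronisation call

    printf("h_data[1] = %f\n", h_data[1]);
    cudaFreeHost(h_data);
    return 0;
}
```

Note that the kernel writes land in host memory as soon as the synchronisation call returns, which is exactly the copy traffic the feature is meant to eliminate.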

Developers can now use a hardware-level debugger on CUDA-enabled GPUs that offers the simplicity of the popular open-source GDB debugger, yet enables a developer to easily debug a program running thousands of threads on the GPU, the company claimed. This CUDA-GDB debugger for Linux has all the features required to debug directly on the GPU, including the ability to set breakpoints, watch variables and inspect state.
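A typical debugging session might look like the sketch below. The toy kernel is hypothetical; the nvcc flags (-g -G for host and device debug info) and the familiar GDB-style commands shown in the comments are what the CUDA-GDB documentation describes.

```cuda
// debug_demo.cu -- a toy kernel for stepping through with cuda-gdb.
// Build with device-side debug information:
//   nvcc -g -G -o debug_demo debug_demo.cu
// Then, inside the debugger:
//   (cuda-gdb) break square      <- breakpoint inside device code
//   (cuda-gdb) run
//   (cuda-gdb) print idx         <- inspect a per-thread variable
#include <stdio.h>

__global__ void square(int *out, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n)
        out[idx] = idx * idx;    // a natural spot to watch out[idx]
}

int main(void)
{
    const int n = 64;
    int h_out[64], *d_out;
    cudaMalloc((void **)&d_out, n * sizeof(int));
    square<<<1, n>>>(d_out, n);
    cudaMemcpy(h_out, d_out, n * sizeof(int), cudaMemcpyDeviceToHost);
    printf("h_out[5] = %d\n", h_out[5]);  // expect 25
    cudaFree(d_out);
    return 0;
}
```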

A new exclusive-mode system configuration option allows an application to gain exclusive use of a GPU, guaranteeing that 100 per cent of the GPU's processing power and memory will be dedicated to that application. Multiple applications can still run concurrently on the system, but only one application can use each GPU at a time. This configuration is particularly useful on Tesla cluster systems, where large applications may require dedicated use of one or more GPUs on each node of a Linux cluster, the company said.
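An application can check which compute mode a GPU is in before claiming it. The sketch below assumes the runtime API's device-properties query; the computeMode field and its enum values are real, while the nvidia-smi invocation in the comment is illustrative, as its exact syntax has varied between driver releases.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Sketch: report the compute mode of device 0. An administrator
// enables exclusive mode system-wide, for example with something like
//   nvidia-smi -c 1
// (check your driver's nvidia-smi documentation for the exact form).
int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    if (prop.computeMode == cudaComputeModeExclusive)
        printf("%s: exclusive mode, only one context allowed\n", prop.name);
    else if (prop.computeMode == cudaComputeModeProhibited)
        printf("%s: prohibited, no contexts allowed\n", prop.name);
    else
        printf("%s: default mode, shared between applications\n", prop.name);
    return 0;
}
```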
