Unified memory is one of the bigger feature additions to CUDA 6
It's been a little more than two years since Nvidia decided to open source its CUDA platform, and with the latest release, Nvidia says it has improved parallel programming and made it faster and easier for developers to create next-generation applications. Toward those goals, the CUDA 6 Release Candidate that's now available comes with several new features.
One of the biggest is unified memory support. With CUDA 6, applications can access memory from both the CPU and the GPU through a single pointer, without manually copying data from one to the other.
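As a rough sketch of what this looks like in practice, the snippet below uses `cudaMallocManaged` (the managed-allocation call introduced with unified memory) in place of separate host and device allocations. The kernel and variable names here are illustrative, not taken from Nvidia's release:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: double each element in place.
__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1024;
    float *data;

    // One allocation visible to both CPU and GPU --
    // no explicit cudaMemcpy in either direction.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;   // CPU writes

    scale<<<(n + 255) / 256, 256>>>(data, n);     // GPU reads and writes
    cudaDeviceSynchronize();   // wait for the GPU before the CPU touches data

    printf("data[0] = %f\n", data[0]);            // CPU reads the result
    cudaFree(data);
    return 0;
}
```

Before CUDA 6, the same program would have needed a separate `cudaMalloc` for the device copy plus a `cudaMemcpy` before and after the kernel launch; the runtime now handles that data movement behind the scenes.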
"This is a major time saver that simplifies the programming process, and makes it easier for programmers to add GPU acceleration in a wider range of applications," Nvidia stated in a blog post announcing the release.
Other feature highlights include multi-GPU scaling and drop-in libraries that automatically accelerate BLAS and FFTW calculations, replacing existing CPU-only BLAS and FFTW libraries with GPU-accelerated equivalents. Nvidia also packed in a full suite of programming tools, GPU-accelerated math libraries, documentation, and programming guides.