GP-GPU & CUDA Updates
The Great GTC Meet - Where Disciplines Converge
The GPU Technology Conference hosted by NVIDIA is the perfect breeding ground for ideas, collaborations and enrichment, with like-minded folks coming together to share their experiences and findings in the world of GPU computing. The ground is practically teeming with GPU programmers, graphics designers, research heads, solution providers and more. It also means NVIDIA's top staff are available around the clock for the duration of the show to facilitate, assist and take in feedback from participants for future development. For us, this is an added bonus, as we get to pick the brains of various NVIDIA product managers and experts on their progress across the company's verticals. We managed to get in touch with executives like Jen-Hsun Huang, CEO of NVIDIA; Bill Dally, Chief Scientist (VP of Research); Andy Keane, GM of GPU Computing; Matt Wuebbling, Senior Product Manager for Notebooks and Tegra; and many other key figures. Since NVIDIA's involvement in the industry spans several verticals, we'll share the concise version of our findings for easy reading.
With the first GTC held in 2009, the GPU computing scene achieved 'lift off' status with several experimental efforts. This year, Jen-Hsun Huang believes the industry will reach 'escape velocity' as GP-GPU-enabled programs go into production. By next year, the market will likely have seen the outcomes of those GP-GPU programs, built case studies, shared success stories and driven even greater momentum for GPU computing.
Additional Notes on the CUDA Momentum
We've covered a number of new updates and the uptake of CUDA in a recent article, so here are a few more points from our discussions with the executives:
- The recent PGI CUDA C compiler (CUDA-x86) wasn't meant to get multi-core CPUs to match the performance of a many-core GPU. Most apps do not scale linearly with the number of CPU cores; coherency overhead and limited memory bandwidth get in the way.
- Even if the scalability isn't perfect, if CUDA programs can run on a 1000-core cluster and achieve a speed-up over a non-CUDA version, there's still much to gain.
- The most important thing is that CUDA apps can now run everywhere on the widely available x86 platform, which greatly increases their usefulness.
- For many general users, CUDA has yet to make a notable impact. Jen-Hsun believes image processing is a key area where CUDA can make a difference, as it is the most important consumer application that can benefit from a parallel computing architecture.
- As shown by the Adobe demo (pictured below and with video), computational photography (not merely digital photography) is the future of photography. For NVIDIA, this is an important area of interest and one they intend to invest in heavily going forward.
- Last but not least, our media roundtable session raised an interesting question: should NVIDIA open a CUDA app store to encourage more developers to get into the game? NVIDIA thought it was an interesting idea that it could venture into later, but for now there are no such plans.