Although Nvidia hasn’t done much to the design of its GPU architecture recently - other than adding some more stream processors and renaming some of its older GPUs - there’s little doubt that the original GeForce 8-series architecture was groundbreaking stuff. How do you follow up something like that? Well, according to the rumour mill, Nvidia has similarly radical ideas in store for its upcoming GT300 architecture.
Bright Side of News claims to have harvested “information confirmed from multiple sources” about the part, which looks as though it could be set to take on any threat posed by Intel’s forthcoming Larrabee graphics processor. Unlike today’s traditional GPUs, which are based on a SIMD (single instruction, multiple data) architecture, the site reports that GT300 will rely on “MIMD-similar functions” where “all the units work in MPMD mode”.
MIMD stands for multiple instruction, multiple data, and it’s an approach often found in SMP systems and clusters. Meanwhile, MPMD stands for multiple program, multiple data. An MIMD system such as this would enable you to run an independent program on each of the GPU’s parallel processors, rather than having the whole lot run the same program in lockstep. Put simply, this could open up the possibilities of parallel computing on GPUs even further, particularly when it comes to GPGPU apps.
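To make the distinction concrete, here’s a minimal CUDA sketch (purely illustrative, and not based on anything Nvidia has published about GT300). The first kernel is the classic SIMD-friendly case, with every thread doing identical work in lockstep; the second makes odd and even threads follow different code paths, which today’s SIMT hardware handles by serialising the branches within a warp, and which a MIMD/MPMD design could in principle run as genuinely independent programs.

```cuda
#include <cuda_runtime.h>

// SIMD/SIMT-friendly: every thread executes the same instruction stream.
__global__ void scale_all(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;              // identical work, done in lockstep
}

// Divergent: threads pick different "programs" based on their index.
// On current SIMT hardware the two branches within a warp are serialised;
// a MIMD/MPMD part could let each unit follow its own instruction stream.
__global__ void mixed_work(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (i % 2 == 0)
        data[i] = sqrtf(data[i]);       // "program" A
    else
        data[i] = data[i] * data[i];    // "program" B
}

int main()
{
    const int n = 1 << 20;
    float *d;
    cudaMalloc((void **)&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    scale_all<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    mixed_work<<<(n + 255) / 256, 256>>>(d, n);
    cudaDeviceSynchronize();

    cudaFree(d);
    return 0;
}
```

If the rumoured 512-unit design really does work in MPMD mode, the second case is where it would pay off: different processors could run entirely different programs at once, rather than taking turns.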
Computing expert Greg Pfister, who’s worked in parallel computing for 30 years, has a good blog post about the differences between MIMD and SIMD architectures here, which is well worth a read if you want to find out more. Pfister makes the case that a major difference between Intel’s Larrabee and an Nvidia GPU running CUDA is that the former will use a MIMD architecture, while the latter uses a SIMD architecture.
“Pure graphics processing isn’t the end point of all of this,” says Pfister. He gives the example of game physics, saying “maybe my head just isn't build for SIMD; I don't understand how it can possibly work well [on SIMD]. But that may just be me.”
Pfister says there are pros and cons to both approaches. “For a given technology,” says Pfister, “SIMD always has the advantage in raw peak operations per second. After all, it mainly consists of as many adders, floating-point units, shaders, or what have you, as you can pack into a given area.” That raw advantage, he adds, is why “engineers who have never programmed don’t understand why SIMD isn’t absolutely the cat’s pajamas.”
He points out, however, that SIMD also has its problems. “There’s the problem of batching all those operations,” says Pfister. “If you really have only one ADD to do, on just two values, and you really have to do it before you do a batch (like, it’s testing for whether you should do the whole batch), then you’re slowed to the speed of one single unit. This is not good. Average speeds get really screwed up when you average with a zero. Also not good is the basic need to batch everything. My own experience in writing a ton of APL, a language where everything is a vector or matrix, is that a whole lot of APL code is written that is basically serial: One thing is done at a time.” As such, Pfister says that “Larrabee should have a big advantage in flexibility, and also familiarity. You can write code for it just like SMP code, in C++ or whatever your favorite language is.”
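To see the batching problem Pfister describes in code, here’s a toy sketch using the standard CUDA host API (nothing GT300-specific): shipping a “batch” containing a single ADD off to the GPU leaves almost the entire chip idle, and the launch and copy overhead dwarfs the cost of the operation itself.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Pfister's worst case: a single ADD on just two values.
// One unit does the work while the rest of the chip sits idle.
__global__ void add_one_pair(const float *a, const float *b, float *out)
{
    if (threadIdx.x == 0 && blockIdx.x == 0)
        *out = *a + *b;
}

int main()
{
    float ha = 1.0f, hb = 2.0f, hout = 0.0f;
    float *da, *db, *dout;
    cudaMalloc((void **)&da, sizeof(float));
    cudaMalloc((void **)&db, sizeof(float));
    cudaMalloc((void **)&dout, sizeof(float));
    cudaMemcpy(da, &ha, sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, &hb, sizeof(float), cudaMemcpyHostToDevice);

    // A batch of one: correct, but the round trip to the GPU costs far
    // more than simply computing ha + hb on the CPU would.
    add_one_pair<<<1, 1>>>(da, db, dout);
    cudaMemcpy(&hout, dout, sizeof(float), cudaMemcpyDeviceToHost);

    printf("GPU says %.1f + %.1f = %.1f\n", ha, hb, hout);

    cudaFree(da);
    cudaFree(db);
    cudaFree(dout);
    return 0;
}
```

Multiply that sort of overhead across the serial stretches of real code and you get exactly the effect Pfister describes: the average gets dragged down towards the speed of a single unit.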
Bright Side of News points out that this could potentially put the GPU’s parallel processing units “almost on equal terms” with the “FPUs inside latest AMD and Intel CPUs”. In terms of numbers, the site claims that the top-end GT300 part will feature 16 groups, each containing 32 parallel processing units, making for a total of 512. The site also claims that the GPU’s scratch cache will be “much more granular”, which will enable a greater degree of “interactivity between the cores inside the cluster”.
No information on clock speeds has been revealed yet, but if the rumours are true, it looks as though Nvidia’s forthcoming GT300 GPU will offer something genuinely new to the GPU industry. Are you excited about the prospect of an MIMD-based GPU architecture with 512 parallel processing units, and could this help Nvidia to take on the threat from Intel’s Larrabee graphics chip? Let us know your thoughts in the forums.