ATI Radeon HD 2900 XT: Calling a Spade a Spade
by Derek Wilson on May 14, 2007 12:04 PM EST - Posted in GPUs
Next Up: NVIDIA's G80
NVIDIA has been more tight-lipped about their underlying architecture, but we will infer as much as possible from the block diagrams we've seen and conversations we've had.
The G80 shader core is a little different from the R600. It is built on eight SIMD units, each containing 16 SPs. The SIMD instructions are not VLIW but single scalar instructions, and each SP within a SIMD unit executes that instruction on a different thread. While groups of 16 SPs share resources, NVIDIA's compiler doesn't need to build VLIW instructions to keep these SPs busy, and dependencies between SPs can't really arise because each SP is working on a different thread.
The bottom line here is that up to eight distinct shader operations are running across 128 threads at one time. This means we could have 128 threads all complete a scalar operation every clock, or we could have 128 threads all complete a 4-wide vector operation one component at a time over four clocks.
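To make that issue model concrete, here is a minimal sketch in Python (our own illustration under the assumptions above, not NVIDIA's actual scheduler) of how a scalar op and a 4-wide vector op map onto 128 threads:

```python
# Minimal sketch (not NVIDIA's scheduler): how scalar and 4-wide vector
# shader ops map onto G80's issue model as described above.

SIMD_UNITS = 8          # independent SIMD units in the shader core
SPS_PER_UNIT = 16       # scalar stream processors per unit
THREADS_IN_FLIGHT = SIMD_UNITS * SPS_PER_UNIT   # 128 threads issue per clock

def clocks_to_complete(vector_width: int) -> int:
    """Each SP retires one scalar component per clock, so an N-wide
    vector op on a thread takes N clocks on G80."""
    return vector_width

for width, name in [(1, "scalar"), (4, "vec4")]:
    clocks = clocks_to_complete(width)
    print(f"{name}: {THREADS_IN_FLIGHT} threads finish in {clocks} clock(s), "
          f"{THREADS_IN_FLIGHT} component ops retired per clock")
```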
On NVIDIA hardware, vertex threads are assigned to SIMD units in blocks of 16, while geometry and pixel threads are assigned in blocks of 32 (16 threads over two clocks). With smaller blocks, we see better branch performance but worse cache or prefetch utilization than we would with a more coarsely grained approach.
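As a rough illustration of why block size matters for branching, the toy simulation below (hypothetical workload and divergence rate, not measured data) counts how many blocks end up paying for both sides of a branch at block sizes of 16 and 32; when any thread in a block diverges, the whole block must walk both paths:

```python
# Toy illustration: cost of a divergent branch with thread blocks of 16 vs. 32.
# Workload size and branch probability are assumptions for illustration only.
import random

random.seed(0)
TOTAL_THREADS = 1024
P_TAKEN = 0.05          # assumed fraction of pixels taking the "expensive" path

def divergent_blocks(block_size: int) -> int:
    """Count blocks in which at least one thread diverges."""
    blocks = 0
    for _ in range(TOTAL_THREADS // block_size):
        blocks += any(random.random() < P_TAKEN for _ in range(block_size))
    return blocks

for size in (16, 32):
    bad = divergent_blocks(size)
    total = TOTAL_THREADS // size
    print(f"block size {size}: {bad}/{total} blocks pay for both branch paths")
```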
This implementation also means that we don't have to worry about dependencies in the shader code. Of course, it also means we can't extract parallelism from the shader code itself. The upside is a steady rate of 128 operations per clock: this can actually go up in some special cases, but it shouldn't go lower under normal circumstances.
Comparing Shader Architectures: R600 vs. G80
The key to the architecture comparison is to realize that nothing is straight up apples to apples here. We need to look at how much work can be done per clock, how much work is likely to be done per clock, and how much work we can get done per unit time.
First, G80 can process more threads in parallel: 128 as opposed to R600's 64. Performing work on more threads at a time is one very good way of extracting overall parallelism from the problem of graphics. There are millions of pixels in every frame that need to be processed, and if we had hardware large enough we could process them all at once.
However, potentially up to 5x more work is getting done on each of those 64 threads than on NVIDIA's 128 threads. This is because R600 can execute up to five parallel operations per thread, while NVIDIA hardware can only handle one operation at a time per SP (in most cases). But maximizing throughput on the AMD hardware will be much more difficult, and we won't always see peak performance from real code. In the best case, R600 is able to do 2.5x the work of G80 per clock (320 operations on R600 versus 128 on G80). The worst case for code dependency on both architectures gives G80 a 2x advantage over R600 per clock (64 operations on R600 versus 128 on G80).
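A quick back-of-the-envelope calculation using those figures:

```python
# Per-clock comparison using the figures in the text.
R600_THREADS, R600_OPS_PER_THREAD = 64, 5     # VLIW: up to 5 ops per thread
G80_THREADS,  G80_OPS_PER_THREAD  = 128, 1    # scalar: 1 op per thread

r600_best  = R600_THREADS * R600_OPS_PER_THREAD   # 320 ops/clock, all slots filled
r600_worst = R600_THREADS * 1                     # fully dependent code: 64 ops/clock
g80_rate   = G80_THREADS * G80_OPS_PER_THREAD     # steady 128 ops/clock

print(f"best case:  R600 {r600_best} vs G80 {g80_rate} "
      f"-> R600 ahead by {r600_best / g80_rate:.1f}x")
print(f"worst case: R600 {r600_worst} vs G80 {g80_rate} "
      f"-> G80 ahead by {g80_rate / r600_worst:.1f}x")
```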
The real difference is in where parallelism is extracted. Both architectures exploit the fact that threads are independent of each other by using multiple SIMD units. While NVIDIA focused on maximizing parallelism across threads, AMD decided to also extract parallelism inside the instruction stream by using a VLIW approach. AMD's average case will vary with the code being run, but since so many graphics operations are vector based, high utilization can generally be expected.
However, even if we expect high utilization on AMD hardware, the fact remains that G80 has a large clock speed advantage. With the shader core on G80 pushed up to 1.5 GHz, we could still see some cases where R600 is faster, but the majority of the time G80 should be able to best R600 on a pure compute basis.
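Folding clock speed into the per-clock figures gives a rough sense of the compute rates involved. The 1.5 GHz G80 shader clock comes from above; the ~742 MHz R600 core clock is the HD 2900 XT's spec and is assumed here since it isn't restated in this section:

```python
# Rough compute-rate estimate folding in clock speed.
G80_CLOCK  = 1.5e9    # Hz, shader core (from the text)
R600_CLOCK = 742e6    # Hz, assumed HD 2900 XT core clock

g80_ops_per_sec        = 128 * G80_CLOCK    # steady-state scalar issue
r600_peak_ops_per_sec  = 320 * R600_CLOCK   # all 5 VLIW slots filled
r600_worst_ops_per_sec = 64  * R600_CLOCK   # fully dependent code

print(f"G80 steady: {g80_ops_per_sec / 1e9:.0f} Gops/s")
print(f"R600 peak:  {r600_peak_ops_per_sec / 1e9:.0f} Gops/s")
print(f"R600 worst: {r600_worst_ops_per_sec / 1e9:.0f} Gops/s")
```

Under these assumptions R600 at full VLIW utilization can still edge out G80 (roughly 237 vs. 192 Gops/s), but any drop below peak utilization quickly hands the advantage back to G80's higher clock.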
This overview still isn't the bottom line in performance. Efficient latency hiding, good scheduling, high cache utilization, high availability of texture data, good branching, and fast and efficient Z/stencil and color processing all contribute as well. Where possible, let's explore those areas a bit more.
86 Comments
imaheadcase - Tuesday, May 15, 2007 - link
Says who? Most people I know don't care to turn on AA since they visually can't see a difference. Only people who are picky about everything they see normally do; the majority of people don't notice "jaggies" since the brain fixes them for you when you play.
Roy2001 - Tuesday, May 15, 2007 - link
Says who? Most people I know don't care to turn on AA since they visually can't see a difference.
------------------------------------------
Wow, I never turn it off once I am used to having AA. I cannot play games anymore without AA.
Amuro - Tuesday, May 15, 2007 - link
Says who? No one who spent $400 on a video card would turn off AA.
SiliconDoc - Wednesday, July 8, 2009 - link
Boy we'd sure love to hear those red fans claiming they turn off AA nowadays and it doesn't matter. LOL
It's just amazing how thick it gets.
imaheadcase - Tuesday, May 15, 2007 - link
Sure they do, because it's a small "tweak" with a performance hit. I say, who spends $400 on a video card to remove "jaggies" when they aren't noticeable to most people in the first place? Same reason most people don't go for SLI or Crossfire: in the end it offers nothing substantial for most people who play games.
Some might like it, but they would not miss it if they stopped using it for some time. It's not like it's a make-or-break feature of a video card.
motiv8 - Tuesday, May 15, 2007 - link
Depends on the game or player, tbh. I play within ladders without AA turned on, but for games like Oblivion I would use AA. Depends on your needs at the time.