ATI Radeon X800 Pro and XT Platinum Edition: R420 Arrives
by Derek Wilson on May 4, 2004 10:28 AM EST - Posted in GPUs
A New Compression Scheme: 3Dc
3Dc isn't something that's going to make current games run better or faster. We aren't talking about a glamorous technology; 3Dc is a lossy compression scheme for use in 3D applications (as its name is supposed to imply). Bandwidth is a highly prized commodity inside a GPU, and compression schemes exist to ease the pressure on developers to limit the amount of data pushed through a graphics card.
There are already a few compression schemes out there, but in their highest compression modes, they introduce some discontinuity into the texture. This is acceptable in some applications, but not all. The specific application ATI is initially targeting with 3Dc is normal mapping.
Normal mapping is used to make the lighting of a surface more detailed than its geometry. Usually, the normal vector at any given point is interpolated from the normal data stored at the vertex level, but, in order to increase the detail of lighting and texturing effects on a surface, normal maps can be used to specify how normal vectors should be oriented across an entire surface at a high level of detail. If very large normal maps are used, enormous amounts of lighting detail can produce the illusion of geometry that isn't actually there.
Here's an example of how normal mapping can add the appearance of more detailed geometry
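To make the idea concrete, here is a minimal sketch of per-pixel diffuse lighting driven by a normal map, assuming the map is stored as a 2D array of (x, y, z) tuples; the function names and layout are our own illustration, not code from any actual engine:

```python
import math

def normalize(vec):
    # Scale a vector to unit length.
    length = math.sqrt(sum(c * c for c in vec))
    return tuple(c / length for c in vec)

def diffuse_intensity(normal_map, u, v, light_dir):
    # Fetch the stored per-texel normal rather than interpolating one
    # from the surrounding vertices; this is what lets a flat surface
    # light as if it had fine geometric detail.
    n = normalize(normal_map[v][u])
    l = normalize(light_dir)
    # Lambert's law: intensity is proportional to N . L, clamped at zero.
    return max(0.0, sum(nc * lc for nc, lc in zip(n, l)))
```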
In order to work with these large data sets, we would want to use a compression scheme. But since we don't want discontinuities in our lighting (which could show up as flashing or jumpy lighting on a surface), we would like a compression scheme that maintains the smoothness of the original normal map. Enter 3Dc.
This is an example of how 3Dc can help alleviate continuity problems in normal map compression
In order to facilitate a high level of continuity, 3Dc divides textures into four-by-four blocks of vector4 data with 8 bits per component (512-bit blocks). For normal map compression, we throw out the z component, which can be calculated from the x and y components of the vector (all normal vectors in a normal map are unit vectors and fit the form x^2 + y^2 + z^2 = 1). After throwing out the unused 16 bits from each normal vector, we then calculate the minimum and maximum x and the minimum and maximum y for the entire 4x4 block. These four values are stored, and each x or y value is stored as a 3-bit value selecting one of 8 equally spaced steps between the minimum and maximum x or y values (inclusive).
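The z reconstruction mentioned above is simple enough to show directly; this one-liner is our own sketch of the math, not ATI's code:

```python
import math

def reconstruct_z(x, y):
    # A unit normal satisfies x^2 + y^2 + z^2 = 1, so z = sqrt(1 - x^2 - y^2).
    # Tangent-space normals point out of the surface, so the positive root
    # is taken; the clamp guards against rounding error.
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))
```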
The storage space required for a 4x4 block of normal map data using 3Dc
The resulting compressed data is 4 vectors * 4 vectors * 2 components * 3 bits + 32 bits (128 bits) in size, giving a 4:1 compression ratio for normal maps with no discontinuities. Any two-channel or scalar data can be compressed fairly well via this scheme. When compressing data that is very noisy (or otherwise inherently discontinuous, though this is not often seen), accuracy may suffer, and the compression ratio falls off for data with more than two components (other compression schemes may be more useful in these cases).
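As a rough sketch of the scheme just described (a simplified software model under our own assumptions, not ATI's hardware implementation), compressing and decompressing one channel of a 4x4 block might look like this:

```python
def compress_channel(values):
    """Compress sixteen 8-bit values (one channel of a 4x4 block).
    Stores the block's min and max (8 bits each) plus a 3-bit index per
    value selecting one of 8 equally spaced steps between them:
    8 + 8 + 16 * 3 = 64 bits; the x and y channels together give 128."""
    lo, hi = min(values), max(values)
    span = hi - lo
    # Quantize each value to the nearest of the 8 steps lo + i * span / 7.
    indices = [0 if span == 0 else round((v - lo) * 7 / span)
               for v in values]
    return lo, hi, indices

def decompress_channel(lo, hi, indices):
    # Invert the mapping; each value is recovered to within span / 14.
    return [lo + (hi - lo) * i / 7 for i in indices]
```

A 512-bit source block thus shrinks to 2 * 64 = 128 bits, which is where the 4:1 figure above comes from.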
ATI would really like this compression scheme to catch on much as S3TC and DXTC have. Of course, the fact that compression and decompression of 3Dc is built into R420 (and not NV40) will play no small part in ATI's evangelism of the technology. After all is said and done, future hardware support from other vendors will be based on the software adoption rate of the technology, and software adoption will likely also be influenced by hardware vendors' plans for future support.
As far as we are concerned, all methods of increasing apparent usable bandwidth inside a GPU in order to deliver higher quality games to end users are welcome. Until memory bandwidth surpasses the needs of graphics processors (which will never happen), innovative and effective compression schemes will be very helpful in applying all the computational power available in modern GPUs to very large sets of data.
Comments
413xram - Wednesday, May 5, 2004 - link
They announced they were going to in their release anyway. Later on this summer. Why not now?
jensend - Wednesday, May 5, 2004 - link
#61 - nuts. 512 MB of RAM will pull loads more power, put out a lot more heat, cost a great deal more (especially now, since RAM prices are sky-high), and give negligible if any performance gains. Heck, even 256 MB is still primarily a marketing gimmick.
413xram - Wednesday, May 5, 2004 - link
They (ATI) are using the same technology that their previous cards are using. They pretty much just added more transistors to perform more functions at a higher speed. I am willing to bet my paycheck that they spent nowhere close to 400 million dollars to run neck and neck with Nvidia in performance. I guess "virtually nothing" is an overstatement. My apologies.
Phiro - Wednesday, May 5, 2004 - link
Where do you get your info that ATI spent "virtually nothing"?
413xram - Wednesday, May 5, 2004 - link
Both cards perform brilliantly. They are truly a huge step in graphics processing. One problem I foresee, though, is that Nvidia spent 400 million dollars on development of their new NV40 technology, while ATI spent virtually nothing to achieve the same performance gains. Economically, that is a hard pill for Nvidia to swallow. It is true that Nvidia's card has the 3.0 pixel shading; unfortunately, though, they are banking on hardware that is not supported upon release of the card. From a consumer's standpoint, that is a hard sell when it comes to video cards. I have learned from the past that future possibilities of technology in hardware do nothing for me today. Not to mention the power supply issue, which does not help either.
Nvidia must find a way to get better performance out of their new card (I can't believe I'm saying that after seeing the specs it already performs at), or it may be a long, HOT, and expensive summer for them.
P.S. Nvidia, a little advice: speed up the release of your 512 MB card. That would definitely sell me. Overclocking your 6800 is something that 90% of us in this forum would do anyway.
theIrish1 - Wednesday, May 5, 2004 - link
heh, whatever.. whatever, and whatever. I love the fanboyisms....
I admit I am a fan of ATI cards. I bought a 9700 Pro and a 9500 Pro (in my secondary gaming rig) when they first came out, and an 8500 "pro" before that... but now I want to upgrade again. I am keeping an open mind. After looking at benchmarks, it is clear that both cards have their wins and losses depending on the test. I don't think there is a clear-cut winner. nVidia got there by new innovation/technology. ATI got there by optimizing "older" technology.
At this point, with pricing being the same... I think I still have to lean to the ATI cards, the main reasons being heat and power consumption. If the 6800U was $75 or $100 cheaper, I would probably go with that. It will be interesting to see where the 6850 falls benchmark-wise, and also in pricing. If the 6850 takes the $500 price point, where will that leave the 6800U? $450? Or will the 6850 be $550?
Something else about the X800 Pro (which, by the way, a lot of the readers/posters seem to be confusing with the XT model). Anyway, there are a few online stores out there still taking pre-orders for the X800 Pro... for $500+. I thought the Pro was going to go at $400 and the XT at $500...?!?
413xram - Wednesday, May 5, 2004 - link
Pumpkinierre - Wednesday, May 5, 2004 - link
On the fabrication of the two GPUs, from The Tech Report: "Regardless, transistor counts are less important, in reality, than die size, and we can measure that. ATI's chips are manufactured by TSMC on a 0.13-micron, low-k "Black Diamond" process. The use of a low-capacitance dielectric can reduce crosstalk and allow a chip to run at higher speeds with less power consumption. NVIDIA's NV40, meanwhile, is manufactured by IBM on its 0.13-micron fab process, though without the benefit of a low-k dielectric."
The extra transistors of the 6800U might be taken up by the cinematic encoding/rendering embedded chip. Although ATI claims encoding in their X800 Pro/XT blurb, I haven't seen much yet to distinguish it from the 9800 Pro in this field. The Tech Report checked power consumption at the wall for their test systems, and the 6800s ramp up the power a lot quicker with GPU speed, so I'm not too hopeful about the overclock to 520 MHz or about 6800U Extreme GPU yields. Still, maybe a new stepping or a 90nm SOI shrink might help (I noticed both manufacturers shied away from 90nm).
Anyway brilliant video cards from North America. Congratulations ATI and Nvidia!
NullSubroutine - Wednesday, May 5, 2004 - link
If it was nice sarcasm, I can laugh; if it was nasty sarcasm, you can back off. I can see it would be simple for me to overlook the map used; however, there is no indication of what Atech used. One could assume, or someone could ask for the real answer, and if they are really lucky, they will get a smart-ass remark. After checking through 10 different reviews, I found similar results to Atech when they had 25 bots; THG had none.
Next time, save us both the hassle and just say THG didn't use bots, and Atech probably did.
TrogdorJW - Tuesday, May 4, 2004 - link
#54 - Think about things for a minute. Gee... I wonder why THG and AT got such different scores on UT2K4... Might it be something like the selection of map and the demo used? Nah, that would be too simple. /sarcasm
From THG: "For our tests in UT2004 we used our own timedemo on the map Assault-Torlan (no bots). All quality options are set to maximum."
No clear indication of what was used for the map or demo on AT, but I'm pretty sure that it was also a home-brewed demo, and likely on a different map and perhaps with a different number of players. Clearly, though, it was not the same demo as THG used... unless THG is in the habit of giving their benchmarking demos out? Didn't think so.
I see questions like this all the time. Unless two sites use the exact same settings, it's almost impossible to directly compare their scores. There is no conspiracy, though. Both sites pretty much say the same thing: close match, with the edge going to ATI right now, especially in DX9, while NV still reigns supreme in OGL.