AMD's Radeon HD 5870: Bringing About the Next Generation Of GPUs
by Ryan Smith on September 23, 2009 9:00 AM EST - Posted in GPUs
Sometimes a surprise is nice. Other times it’s nice for things to go as planned for once.
Compared to the HD 4800 series launch, AMD's launch of the HD 5800 series today falls into the latter category. There are no last-minute announcements, no pricing games, and no NDAs rolled back unexpectedly. Today's launch is about as normal as a new GPU launch can get.
However, with the lack of last-minute surprises, it becomes harder to keep things under wraps. When details of a product launch are announced well ahead of time, inevitably someone on the inside can't help but leak what's going on. The result is that what we have to discuss today isn't going to come as a great surprise to some of you.
As early as a week ago the top thread on our video forums had the complete and correct specifications for the HD 5800 series. So if you've been peeking at what's coming down the pipe (naughty naughty), then much of this is going to be a confirmation of what you already know.
Today’s Launch
Three months ago AMD announced the Evergreen family of GPUs, AMD's new line of DirectX 11 based GPUs. Two weeks ago we got our first briefing on the members of the Evergreen family, and AMD publicly announced its Eyefinity technology running on the then-unnamed Radeon HD 5870. Today finally marks the start of the Evergreen launch, with cards based on the first chip, codenamed Cypress, being released. Out of Cypress come two cards: the Radeon HD 5870 and the Radeon HD 5850.
| | ATI Radeon HD 5870 | ATI Radeon HD 5850 | ATI Radeon HD 4890 | ATI Radeon HD 4870 |
| Stream Processors | 1600 | 1440 | 800 | 800 |
| Texture Units | 80 | 72 | 40 | 40 |
| ROPs | 32 | 32 | 16 | 16 |
| Core Clock | 850MHz | 725MHz | 850MHz | 750MHz |
| Memory Clock | 1.2GHz (4.8GHz data rate) GDDR5 | 1GHz (4GHz data rate) GDDR5 | 975MHz (3.9GHz data rate) GDDR5 | 900MHz (3.6GHz data rate) GDDR5 |
| Memory Bus Width | 256-bit | 256-bit | 256-bit | 256-bit |
| Frame Buffer | 1GB | 1GB | 1GB | 1GB |
| Transistor Count | 2.15B | 2.15B | 959M | 956M |
| Manufacturing Process | TSMC 40nm | TSMC 40nm | TSMC 55nm | TSMC 55nm |
| Price Point | $379 | $259 | ~$180 | ~$160 |
So what's Cypress in a nutshell? It's an RV790 (Radeon HD 4890) with virtually everything doubled, plus the additional hardware needed to meet the DirectX 11 specification, new features such as Eyefinity and angle-independent anisotropic filtering, and lower idle power usage, all fabricated on TSMC's 40nm process. Beyond that, Cypress is a direct evolution and refinement of the RV7xx, and closely resembles its ancestor in design and internal workings.
The leader of the Evergreen family is the Radeon HD 5870, which will be AMD's new powerhouse card. The 5870 features 1600 stream processors divided among 20 SIMDs, 80 texture units, and 32 ROPs, with 1GB of GDDR5 on board connected to a 256-bit memory bus. The 5870 is clocked at 850MHz for the core and 1.2GHz (4.8GHz effective) for the memory, giving it a maximum compute throughput of 2.72 TFLOPS. Load power is 188W, and idle power is a tiny 27W. It is launching at an MSRP of $379.
Below that we have the 5850 (which we will not be reviewing today), a slightly cut-down version of the 5870. Here we have 1440 stream processors divided among 18 SIMDs, 72 texture units, the same 32 ROPs, and the same 256-bit memory bus. The 5850 is clocked at 725MHz for the core and 1GHz (4GHz effective) for the memory, giving it a maximum compute throughput of 2.09 TFLOPS. With the disabled units, load power drops slightly to 170W, while idle power stays at 27W. AMD expects the 5850 to deliver approximately 80% of the 5870's performance, and is pricing it at $259.
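For those wondering where these TFLOPS numbers come from, they follow directly from the spec table: each stream processor executes one multiply-add (two floating-point operations) per cycle, and peak memory bandwidth is simply bus width times effective data rate. A quick sketch of the arithmetic:

```python
def peak_gflops(stream_processors, core_clock_mhz):
    """Peak single-precision throughput in GFLOPS: each stream
    processor does one multiply-add (2 FLOPs) per cycle."""
    return stream_processors * core_clock_mhz * 2 / 1000.0

def bandwidth_gbps(bus_width_bits, data_rate_mhz):
    """Peak memory bandwidth in GB/s: bus width in bytes times
    the effective (quad-pumped GDDR5) data rate."""
    return (bus_width_bits / 8) * data_rate_mhz / 1000.0

# Radeon HD 5870: 1600 SPs at 850MHz
print(peak_gflops(1600, 850))     # 2720.0 GFLOPS, i.e. 2.72 TFLOPS
# Radeon HD 5850: 1440 SPs at 725MHz
print(peak_gflops(1440, 725))     # 2088.0 GFLOPS, i.e. ~2.09 TFLOPS
# 5870 memory: 256-bit bus at a 4.8GHz effective data rate
print(bandwidth_gbps(256, 4800))  # 153.6 GB/s
```

Note these are theoretical maxima; real workloads rarely keep all 1600 ALUs busy every cycle.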
Availability is going to be an issue, so we may as well get the subject out of the way. While today is a hard launch, it’s not quite as hard of a launch as we would like to see. AMD is launching the 5800 series with Dell, so it shouldn't come as a surprise if Dell has cards when e-tailers don't.
The situation with general availability is murky at best. The first word we heard was that there might be a week of lag, but as of today AMD tells us that it expects e-tailers to have 5870 cards on the 23rd and 5850 cards next week. In any case, whatever cards do make it into the channel are going to be in short supply, which matches the overall vibe we're getting from AMD that supplies will initially be tight relative to demand. So even after the first few days it may be hard to get a card. Given tight supply, we'll be surprised if prices stick to the MSRP; e-tailers are likely to charge a premium in the first days. Depending on just how high demand is, it may take a while for prices to fall to their MSRPs and for AMD to completely clear the backlog of demand for these cards.
Update: As of 5am EDT, we have seen the availability of 5870s come and go. Newegg had some in stock, but they have since sold out. So indeed AMD did make the hard launch (which we're always glad to see), but it looks like our concerns about a limited supply are proving to be true.
Finally, we asked AMD about the current TSMC 40nm situation, and they have told us that they are happy with it. Our concern was that problems at TSMC (specifically: yield) would be a holdup in getting more cards out there, but this does not look to be the case. However given the low supply of the cards compared to where AMD expects the supply to be, TSMC’s total 40nm capacity may not be to AMD’s liking.
Comments
mapesdhs - Saturday, September 26, 2009 - link
MODel3 writes:
> 1.Geometry/vertex performance issues ...
> 2.Geometry/vertex shading performance issues ...
Would perhaps some of the subtests in 3DMark06 be able to test this?
(not sure about Vantage, never used that yet) Though given what Jarred
said about the bandwidth and other differences, I suppose it's possible
to observe large differences in synthetic tests which are not the real
cause of a performance disparity.
The trouble with heavy GE tests is, they often end up loading the fill
rates anyway. I've run into this problem with the SGI tests I've done
over the years:
http://www.sgidepot.co.uk/sgi.html
The larger landscape models used in the Inventor tests are a good
example. The points models worked better in this regard for testing
GE speed (stars3/star4), but I don't know to what extent modern PC
gfx is designed to handle points modelling - probably works better
on pro cards. Actually, Inventor wasn't a good choice anyway as it's
badly CPU-bound and API-heavy (I should have used Performer, gives
results 5 to 10X faster).
Anyway, point is, synthetic tests might allow one to infer that one
aspect of the gfx pipeline is a bottleneck when in fact it isn't.
Ages ago I emailed NVIDIA (Ujesh, who I used to know many moons ago,
but alas he didn't reply) asking when, if ever, they would add
performance counters and other feedback monitors to their gfx
products so that applications could tell what was going on in the
gfx pipeline. SGI did this years ago, which allowed systems like
IR to support impressive functions such as Dynamic Video Resizing by
being able to monitor frame by frame what was going on within the gfx
engine at each stage. Try loading any 3D model into perfly, press F1
and click on 'Gfx' in the panel (Linux systems can run Performer), eg.:
http://www.sgidepot.co.uk/misc/perfly.gif
Given how complex modern PC gfx has become, it's always been a
mystery to me why such functions haven't been included long ago.
Indeed, for all that Crysis looks amazing, I was never that keen on
it being used as a benchmark since there was no way of knowing
whether the performance hammering it created was due to a genuinely
complex environment or just an inefficient gfx engine. There's still
no way to be sure.
If we knew what was happening inside the gfx system, we could easily
work out why performance differences for different apps/games crop
up the way they do. And I would have thought that feedback monitors
within the gfx pipe would be even more useful to those using
professional applications, just as it was for coders working on SGI
hardware in years past.
Come to think of it, how do NVIDIA/ATI even design these things
without being able to monitor what's going on? Jarred, have you ever
asked either company about this?
Ian.
JarredWalton - Saturday, September 26, 2009 - link
I haven't personally, since I'm not really the GPU reviewer here. I'd assume most of their design comes from modeling what's happening, and with knowledge of their architecture they probably have utilities that help them debug stuff and figure out where stalls and bottlenecks are occurring. Or maybe they don't? I figure we don't really have this sort of detail for CPUs either, because we have tools that know the pipeline and architecture and they can model how the software performs without any hardware feedback.
MODEL3 - Thursday, October 1, 2009 - link
I checked the web for synthetic geometry tests. Sadly I only found 3DMark Vantage tests.
You can't tell much from them, but they are indicative.
Check:
http://www.pcper.com/article.php?aid=783&type=...
GPU Cloth: 5870 is only 1.2X faster than 4890. (vertex/geometry shading test)
GPU Particles: 5870 is only 1.2X faster than 4890. (vertex/geometry shading test)
Perlin Noise: 5870 is 2.5X faster than 4890. (math-heavy Pixel Shader test)
Parallax Occlusion Mapping: 5870 is 2.1X faster than 4890. (complex Pixel Shader test)
All the above 4 tests are not bandwidth limited at all.
Just for example, if you check:
http://www.pcper.com/article.php?aid=674&type=...
You will see that a 750MHz 4870 512MB is 20-23% faster than a 625MHz 4850 in all of the above 4 tests, so the extra bandwidth (115.2GB/s vs 64GB/s) doesn't help at all.
But the 4850 is extremely bandwidth limited in the color fillrate test (the 4870 is 60% faster than the 4850).
Also it shouldn't be a problem of dual-rasterizer/dual-SIMD engine efficiency, since the synthetic Pixel Shader tests scale fine (more than 2X) while the synthetic geometry shading tests scale at only 1.2X.
My guess is ATI didn't improve the classic geometry setup engine and the GS because they want to promote vertex/geometry techniques based on the DX11 tessellator from now on.
Zool - Friday, September 25, 2009 - link
In DX11 the fixed tessellation units will produce much finer geometry detail for much less memory, and on chip, so I don't think there's a single problem with that. Also the compute shader needs minimal memory bandwidth and can utilize plenty of idle shaders. The card is designed with DX11 in mind and it isn't using the whole pipeline after all. I wouldn't draw conclusions too early. (I think the performance will be much better after a few driver releases.)
MODEL3 - Saturday, September 26, 2009 - link
The DX11 tessellator can only be utilized if the game engine takes advantage of it. But I am not talking about the tessellator.
I am talking about the classic Geometry unit (DX9/DX10 engines) and the Geometry Shader [GS] (DX10 engines only).
I'll check to see if i can find a tech site that has synthetic bench for Geometry related perf. and i will post again tomorrow, if i can find anything.
JarredWalton - Friday, September 25, 2009 - link
It's worth noting that when you factor in clock speeds, compared to the 5870 the 4870X2 offers 88% of the core performance and 50% more bandwidth. Some algorithms/games require more bandwidth and others need more core performance, but it's usually a combination of the two. The X2 also has CrossFire inefficiencies to deal with.
More interesting perhaps is that the GTX 295 offers (by my estimates, which admittedly are off in some areas) roughly 10% more GPU shader performance, about 18.5% more fill rate, and 46% more bandwidth than the HD 5870. The fact that the HD 5870 is still competitive is a good sign that ATI is getting good use of its 5-wide stream processor design, and that it is not memory bandwidth limited -- at least not entirely.
SiliconDoc - Wednesday, September 30, 2009 - link
The 4870X2 has somewhere around "double the data paths" in and out of its two GPUs. So what you have with the 5870, which some have characterized as "2x 770 cores melded into one," is DOUBLE THE BOTTLENECK in and out of the core. They tried to compensate with GDDR5 at 1200MHz (4800MHz effective), but the fact remains, they only get so much with that "NOT ENOUGH DATA PATHS/PINS in and out of that GPU core."
cactusdog - Friday, September 25, 2009 - link
Omg these cards look great. Lol, Silicon Doc is so gutted and furious he is making himself look like a damn fool again, only this time he should be on suicide watch... Nvidia cards are now obsolete.. LOL.
mapesdhs - Friday, September 25, 2009 - link
Hehe, indeed. Have you ever seen a scifi series called, "They Came
From Somewhere Else?" S.D.'s getting so worked up, reminds me of
the scene where the guy's head explodes. :D
Hmm, that's an alternative approach I suppose in place of post
moderation. Just get someone so worked up about something they'll
have an aneurism and pop their clogs... in which case, I'll hand
it back to Jarred. *grin*
Ian.
SiliconDoc - Friday, September 25, 2009 - link
That is quite all right, you fellas make sure to read it all. I am more than happy that the truth is sinking into your gourds; you won't be able to shake it. I am very happy about it.