ATI Radeon HD 4890 vs. NVIDIA GeForce GTX 275
by Anand Lal Shimpi & Derek Wilson on April 2, 2009 12:00 AM EST
The Widespread Support Fallacy
NVIDIA acquired Ageia, the company that wanted to sell you another card to put in your system to accelerate game physics - the PPU. That idea didn’t go over too well. For starters, no one wanted another *PU in their machine. And secondly, there were no compelling titles that required it. At best we saw mediocre games with mildly interesting physics support, or decent games with uninteresting physics enhancements.
Ageia’s true strength wasn’t its PPU chip design; many companies could do that. What Ageia did that was quite smart was acquire an up-and-coming game physics engine, polish it up, and give it away to developers for free. That engine was called PhysX.
Developers can use PhysX in their games for free. There are no strings attached, no licensing fees, nothing. If a developer wants support there are fees, of course, but it’s still a great way of cutting down development costs. The physics engine in a game is responsible for modeling all Newtonian forces within the game; the engine determines how objects collide, how gravity works, etc...
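To make that role concrete, here is a minimal sketch of what a physics engine does every frame: apply forces like gravity, integrate motion, and resolve collisions. This is not PhysX code - the `Body` struct and `stepWorld` function are made-up names for illustration only - just a generic picture of the work the engine takes off a developer's hands.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical, simplified rigid body: height and vertical velocity only.
struct Body {
    float y;   // height above the ground plane, in meters
    float vy;  // vertical velocity, in m/s
};

// One simulation step: apply gravity, integrate motion, and resolve a
// trivial collision against the ground plane at y = 0. A real engine
// (PhysX included) does far more - broad/narrow phase collision detection,
// constraint solving, friction - but the core job is the same: advance
// the Newtonian state of every object in the world.
void stepWorld(std::vector<Body>& bodies, float dt) {
    const float gravity = -9.81f;    // m/s^2
    const float restitution = 0.5f;  // fraction of speed kept after a bounce
    for (Body& b : bodies) {
        b.vy += gravity * dt;        // accelerate under gravity
        b.y  += b.vy * dt;           // explicit Euler integration
        if (b.y < 0.0f) {            // hit the ground: clamp and bounce
            b.y  = 0.0f;
            b.vy = -b.vy * restitution;
        }
    }
}

int main() {
    std::vector<Body> bodies = { {10.0f, 0.0f} };  // drop one body from 10 m
    for (int frame = 0; frame < 120; ++frame)      // simulate ~2 s at 60 fps
        stepWorld(bodies, 1.0f / 60.0f);
    std::printf("height after 2s: %.2f m\n", bodies[0].y);
    return 0;
}
```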
If developers wanted to, they could enable PPU accelerated physics in their games and do some cool effects. Very few developers wanted to because there was no real install base of Ageia cards and Ageia wasn’t large enough to convince the major players to do anything.
PhysX, being free, was of course widely adopted. When NVIDIA purchased Ageia what they really bought was the PhysX business.
NVIDIA continued offering PhysX for free, but it killed off the PPU business. Instead, NVIDIA worked to port PhysX to CUDA so that it could run on its GPUs. The same catch-22 from before remained: developers don’t have to include GPU accelerated physics, and most don’t because they don’t like alienating their non-NVIDIA users. It’s all about hitting the largest audience, and not everyone can run GPU accelerated PhysX, so most developers don’t use that aspect of the engine.
Then we have NVIDIA publishing slides like this:
Indeed, PhysX is one of the world’s most popular physics APIs - but that does not mean that developers choose to accelerate PhysX on the GPU. Most don’t. The next slide paints a clearer picture:
These are the biggest titles NVIDIA has with GPU accelerated PhysX support today. That’s 12 titles; three of them are big ones, and as for most of the rest, well, I won’t go there.
A free physics API is great, and all indicators point to PhysX being liked by developers.
The next several slides in NVIDIA’s presentation go into detail about how GPU accelerated PhysX is used in these titles and how poorly ATI performs when GPU accelerated PhysX is enabled (because ATI can’t run CUDA code on its GPUs, the GPU-friendly code must run on the CPU instead).
We normally hold manufacturers accountable for their performance claims; it was about time we did something about these other claims as well - shall we?
Our goal was simple: we wanted to know if the GPU accelerated PhysX effects in these titles were useful. And if they were, would they be enough to make us pick an NVIDIA GPU over an ATI one if the ATI GPU was faster?
To accomplish this I had to bring in an outsider. Someone who hadn’t been subjected to the same NVIDIA marketing that Derek and I had. I wanted someone impartial.
Meet Ben:
I met Ben in middle school and we’ve been friends ever since. He’s a gamer of the truest form. He generally just wants to come over to my office and game while I work. The relationship is rarely harmful; I have access to lots of hardware (both PC and console) and games, and he likes to play them. He plays while I work and isn't very distracting (except when he's hungry).
These past few weeks I’ve been far too busy for even Ben’s quiet gaming in the office. First there were SSDs, then GDC and then this article. But when I needed someone to play a bunch of games and tell me if he noticed GPU accelerated PhysX, Ben was the right guy for the job.
I grabbed a Dell Studio XPS I’d been working on for a while. It’s a good little system, the first sub-$1000 Core i7 machine in fact ($799 gets you a Core i7-920 and 3GB of memory). It performs similarly to my Core i7 testbeds so if you’re looking to jump on the i7 bandwagon but don’t feel like building a machine, the Dell is an alternative.
I also set up its bigger brother, the Studio XPS 435. Personally I prefer this machine; it’s larger than the regular Studio XPS, albeit more expensive. The larger chassis makes working inside the case and upgrading the graphics card a bit more pleasant.
My machine of choice; I couldn't let Ben have the faster computer.
Both of these systems shipped with ATI graphics; obviously that wasn’t going to work. I decided to pick midrange cards to work with: a GeForce GTS 250 and a GeForce GTX 260.
294 Comments
7Enigma - Thursday, April 2, 2009 - link
Deja vu again, and again, and again. I've posted in no less than 3 other articles how bad some of the conclusions have been. There is NO possible way you could conclude the 275 is the better card at anything other than the 30" display resolution. Not only that, but it appears with the latest Nvidia drivers they are making things worse. Honestly, does anyone else see the parallel between the original OCZ SSD firmware and these new Nvidia drivers? Seems like they were willing to sacrifice 99% of their customers for the 1% that have 30" displays (which probably wouldn't even be looking at the $250 price point). Nvidia, take a note from OCZ's situation; lower performance at 30" to give better performance at 22-24" resolutions would do you much better in the $250 price segment. You shot yourselves in the foot on this one...
Gary Key - Thursday, April 2, 2009 - link
The conclusion has been clarified to reflect the resolution results. It falls right into line with your thoughts and others', as well as our original thoughts that did not make it through the edits correctly.
7Enigma - Thursday, April 2, 2009 - link
Yup, I responded to Anand's post with a thank you. We readers just like to argue, and when something doesn't make sense, we're quick to go on the attack. But also quick to understand and appreciate a correction.
duploxxx - Thursday, April 2, 2009 - link
Just some thoughts: there is only 1 single benchmark out of 7 where the 275 has better frame rates at 1680 and 1920 against the 4890, and yet your final words are that you favor the 275???? Only at 2560 is the 275 clearly the better choice. Are you already in the year 2012, where 2560 might be the standard sales resolution? It is only very recently that 1680 became standard, and even that resolution is high for global OEM market sales. Your 2560 is not even a few % of the market.
I think you have to clarify your final words a bit more regarding your choice... Perhaps if we saw power consumption, fan noise, etc., that would add value to the choice, but for now, TWIMTBP is really not enough of a push to prefer the card. I am sure the red team will improve their drivers as usual too.
Anything else I missed in your review that could counter my thoughts?
SiliconDoc - Monday, April 6, 2009 - link
Derek has been caught in the "2560 wins it all no matter what" mindset from the months on end of ATI taking that cake since the 4870 release. No lower resolutions mattered for squat since ATI lost there - so you'll have to excuse his months-long brainwashing. Thankfully Anand checked in and smacked it out of his review just in time for the red fanboys to start enjoying lower resolution wins while NVIDIA takes the high resolution crown, which is - well... not a win here anymore.
Congratulations, red roosters.
duploxxx - Thursday, April 2, 2009 - link
Just as an add-on, I also checked some other reviews (yes, I always read AnandTech first as my main source of info) and I saw that it runs cooler than a 4870 and actually consumes 10% less than a 4870, so this can't be the reason either, while the 275 stays at the same power consumption as the 280. Also, OC parts have already been shown with GPU clocks above 1000...
cyriene - Thursday, April 2, 2009 - link
I would have liked to see some information on heat output and the temperatures of the cards while gaming. Otherwise, nice article.
7Enigma - Thursday, April 2, 2009 - link
This is an extreme omission. The fact that the 4890 is essentially an overclocked 4870 means that, with virtually nothing changed, you HAVE to show the temps. I still stick by my earlier comment that the Vapo-chill model of the Sapphire 4870 is possibly a better card, since its temps are significantly lower than the stock 4870's while it's already overclocked. I could easily imagine that for $50-60 less you could have the performance of the 4890 at cooler temps (by OC'ing it further). C'mon guys, you have to give thought to this!
SiliconDoc - Monday, April 6, 2009 - link
Umm, they - you know, the AT bosses - don't like the implications of that. So many months, even years, spent screeching like women about NVIDIA rebranding has put them in a very difficult position. Besides, they have to keep up the illusion of superior red power usage, so only after demand will they put up the power chart.
They tried to get away with not doing it, but they couldn't.
initialised - Thursday, April 2, 2009 - link
GPU-Z lists the RV790 as having a die area of 282mm² while the RV770 has 256mm², but both are listed as having the same transistor count.