ATI Radeon HD 4890 vs. NVIDIA GeForce GTX 275
by Anand Lal Shimpi & Derek Wilson on April 2, 2009 12:00 AM EST
Mirror’s Edge: Do we have a winner?
And now we get to the final test. Something truly different: Mirror’s Edge.
This is an EA game. Ben had to leave before we got to this part of the test (he does have a wife and kid, after all), so I went at this one alone.
I'd never played Mirror's Edge. I'd seen the videos, and it looked interesting. You play as a girl named Faith, a runner. You run across rooftops and through buildings; it's all very parkour-like. You're often being pursued by "blues", police officers, as you run through the game. I won't give away any plot details here, but this game, I liked.
GPU accelerated PhysX affects things like how glass shatters and adds destructible cloth to the environment. We posted a video of what the game looks like with NVIDIA GPU accelerated PhysX enabled late last year:
"Here is the side by side video showing better what DICE has added to Mirror's Edge for the PC with PhysX. Please note that the makers of the video (not us) slowed down the game during some effects to better show them off. The slow downs are not performance related issues. Also, the video is best viewed in full screen mode (the button in the bottom right corner)."
In Derek’s blog about the game he said the following:
“We still want to really get our hands on the game to see if it feels worth it, but from this video, we can at least say that there is more positive visual impact in Mirror's Edge than any major title that has used PhysX to date. NVIDIA is really trying to get developers to build something compelling out of PhysX, and Mirror's Edge has potential. We are anxious to see if the follow through is there.”
Well, we have had our hands on the game and I've played it quite a bit. I started with PhysX enabled, looking for the SSD effect: I wanted to play with it on, then take it away and see if I missed it. I played through the first couple of chapters with PhysX enabled, fell in lust with the game, and then turned off PhysX.
I missed it.
I actually missed it. What did it for me was the way the glass shattered. When I was being pursued by blues and they were firing at me as I ran through a hallway full of windows, the hardware accelerated PhysX version was more believable. I felt more like I was in a movie than in a video game. Don’t get me wrong, it wasn’t hyper realistic, but the effect was noticeable.
I replayed a couple of chapters, and played some new ones, with PhysX disabled before turning it back on and repeating the test.
The impact of GPU accelerated PhysX was noticeable. EA had done it right.
The Verdict?
So am I sold? Would I gladly choose a slower NVIDIA part because of PhysX support? Of course not.
I enjoyed GPU accelerated PhysX in Mirror's Edge because it's a good game to begin with. The implementation is subtle, but it augments an already visually interesting title. It makes the gameplay experience slightly more engrossing.
It's a nice bonus if I already own an NVIDIA GPU, but it's not a reason to buy one.
The fact of the matter is that Mirror's Edge should be the bare minimum requirement for GPU accelerated PhysX in games. The game has to be good to begin with, and the effects should be the cherry on top. Crappy titles and gimmicky physics aren't going to convince anyone. Aggressive marketing on top of that is merely going to push people like us to call GPU accelerated PhysX out for what it is. I can't even call the overall implementations I've seen in games half baked; the oven isn't even preheated yet. Mirror's Edge, so far, is an outlier. You can pick a string of cheese off of a casserole and like it, but without some serious time in the oven it's not going to be a good meal.
Then there's the OpenCL argument. NVIDIA won't port PhysX to OpenCL, at least not anytime soon. But Havok is being ported to OpenCL, which means that by the end of this year, any game that uses the OpenCL version of Havok can run GPU accelerated physics on any OpenCL compliant video card (NVIDIA, ATI, and Intel once Larrabee comes out).
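To make the portability argument concrete, here's what vendor-neutral GPU physics looks like at the API level. To be clear, this is our own minimal sketch of a brute-force particle integrator, not Havok's actual code; the kernel, buffer layout, and names are all illustrative. The point is that nothing in it is tied to a particular GPU vendor: any OpenCL 1.0 driver, regardless of whose silicon is underneath, can compile and run it.

```c
/* Minimal OpenCL sketch: one Euler integration step over a particle array.
   Illustrative only -- error checking omitted for brevity. */
#include <stdio.h>
#include <CL/cl.h>

static const char *kSrc =
    "__kernel void integrate(__global float4 *pos,\n"
    "                        __global float4 *vel,\n"
    "                        const float dt) {\n"
    "    int i = get_global_id(0);\n"
    "    vel[i].y += -9.8f * dt;      /* gravity */\n"
    "    pos[i]   += vel[i] * dt;     /* Euler step */\n"
    "}\n";

int main(void)
{
    enum { N = 4096 };
    static cl_float4 pos[N], vel[N];  /* zero-initialized particles */
    float dt = 1.0f / 60.0f;          /* one frame at 60 fps */

    /* Grab the first GPU the runtime knows about -- could be NVIDIA,
       ATI, or (eventually) Larrabee; the code below doesn't care. */
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* The kernel source is compiled at runtime by whichever vendor's
       driver is installed -- the key portability difference vs. PhysX. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "integrate", NULL);

    cl_mem dPos = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                 sizeof(pos), pos, NULL);
    cl_mem dVel = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                 sizeof(vel), vel, NULL);
    clSetKernelArg(k, 0, sizeof(dPos), &dPos);
    clSetKernelArg(k, 1, sizeof(dVel), &dVel);
    clSetKernelArg(k, 2, sizeof(dt), &dt);

    /* One thread per particle, one integration step per frame. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dPos, CL_TRUE, 0, sizeof(pos), pos, 0, NULL, NULL);

    printf("particle 0 y after one frame: %f\n", pos[0].s[1]);
    return 0;
}
```

A real physics middleware port obviously involves far more than this (collision detection, constraint solvers, and a lot of scheduling work), but the host-side plumbing is the same whether the card says ATI or NVIDIA on the box.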
While I do believe that NVIDIA and EA were on to something with the implementation of PhysX in Mirror's Edge, I do not believe NVIDIA is strong enough to drive the entire market on its own. Cross platform APIs like OpenCL will be the future of GPU accelerated physics; they have to be, simply because NVIDIA isn't the only game in town. The majority of PhysX titles aren't even GPU accelerated, and I suspect it won't take long for OpenCL accelerated Havok titles to equal the number that are, once the port is ready.
Until we get a standard for GPU accelerated physics that all GPU vendors can use or until NVIDIA can somehow convince every major game developer to include compelling features that will only be accelerated on NVIDIA hardware, hardware PhysX will be nothing more than fancy lettering on a cake.
You wanted us to look at PhysX in a review of an ATI GPU, and there you have it.
294 Comments
7Enigma - Thursday, April 2, 2009
Deja vu again, and again, and again. I've posted in no fewer than 3 other articles how bad some of the conclusions have been. There is NO possible way you could conclude the 275 is the better card at anything other than the 30" display resolution. Not only that, but it appears that with the latest NVIDIA drivers they are making things worse.

Honestly, does anyone else see the parallel between the original OCZ SSD firmware and these new NVIDIA drivers? It seems like they were willing to sacrifice 99% of their customers for the 1% that have 30" displays (which probably wouldn't even be looking at the $250 price point). NVIDIA, take a note from OCZ's situation: lower performance at 30" to give better performance at 22-24" resolutions would do you much better in the $250 price segment. You shot yourselves in the foot on this one...
Gary Key - Thursday, April 2, 2009
The conclusion has been clarified to reflect the resolution results. It falls right into line with your thoughts and others', as well as our original thoughts that did not make it through the edits correctly.

7Enigma - Thursday, April 2, 2009
Yup, I responded to Anand's post with a thank you. We readers just like to argue, and when something doesn't make sense, we're quick to go on the attack. But also quick to understand and appreciate a correction.

duploxxx - Thursday, April 2, 2009
Just some thoughts: there is only one benchmark out of seven where the 275 has better frame rates at the 1680 and 1920 resolutions than the 4890, and yet your final words are that you favor the 275? Only at 2560 is the 275 clearly the better choice. Are you already in the year 2012, when 2560 might be the standard resolution? It is only very recently that 1680 became standard, and even that resolution is high for the global OEM market. 2560 isn't even a few percent of the market.

I think you have to clarify your final words a bit more. Perhaps if we saw power consumption, fan noise, etc., that would add value to the choice, but for now TWIMTBP is really not enough to prefer the card. I am sure the red team will improve their drivers as usual, too.

Anything else I missed in your review that could counter my thoughts?
SiliconDoc - Monday, April 6, 2009
Derek has been caught up in "2560 wins it all, no matter what" after months on end of ATI taking that cake since the 4870 release. No lower resolution mattered for squat since ATI lost there, so you'll have to excuse his months-long brainwashing.

Thankfully Anand checked in and smacked it out of his review just in time for the red fanboys to start enjoying lower-resolution wins while NVIDIA takes the high-resolution crown, which is, well... not a win here anymore.

Congratulations, red roosters.
duploxxx - Thursday, April 2, 2009
Just as an add-on: I also checked some other reviews (yes, I always read AnandTech first as my main source of info) and saw that it runs cooler than a 4870 and actually consumes 10% less than a 4870, so this can't be the reason either, while the 275 stays at the same power consumption as the 280. Also, OC parts have already been shown with GPU clocks above 1000.

cyriene - Thursday, April 2, 2009
I would have liked to see some information on heat output and the temperatures of the cards while gaming. Otherwise, nice article.
7Enigma - Thursday, April 2, 2009
This is an extreme omission. The fact that the 4890 is essentially an overclocked 4870, with virtually nothing else changed, means you HAVE to show the temps. I still stick by my earlier comment that the Vapo-chill model of the Sapphire 4870 is possibly the better card, since its temps are significantly lower than the stock 4870's while it is already overclocked. I could easily imagine that for $50-60 less you could have the performance of the 4890 at cooler temps (by OC'ing the Vapo-chill further).

C'mon guys, you have to give this some thought!
SiliconDoc - Monday, April 6, 2009
Umm, they (you know, the AT bosses) don't like the implications of that. So many months, even years, spent screeching about NVIDIA rebranding has them in a very difficult position.

Besides, they have to keep up the illusion of superior red power usage, so only after demand will they put up the power chart. They tried to get away without it, but they couldn't do it.
initialised - Thursday, April 2, 2009
GPU-Z lists the RV790 as having a die area of 282mm² while the RV770 has 256mm², but both are listed as having the same transistor count.