ATI's X1000 Series: Extended Performance Testing
by Derek Wilson on October 7, 2005 10:15 AM EST - Posted in GPUs
The Chronicles of Riddick Performance
We don't have as much data as we would like for The Chronicles of Riddick as it looks like the game itself checks the capabilities of our monitor before allowing a setting to be run. While the monitor that we use to test games at high resolutions is capable of running at up to 2048x1536, the data it reports to graphics cards is unreliable. NVIDIA and ATI allow for the ability to specify manually the limits of one's monitor, but in this case, the game goes straight to the flawed source. In any case, all these tests were run with the shader level set to 2.0.
With the data that we have, the high end NVIDIA cards absolutely dominate this benchmark. The 6800 GT even performs as well as the X1800 XT. The X850 XT keeps up with the X1800 XL, and the only surprise is that the X1600 XT performs the same as the 6600 GT. Once again, the playability of the X1300 Pro is limited to 1024x768.
93 Comments
DonPMitchell - Tuesday, October 11, 2005 - link
I wish they had benchmarked Half-Life 2, which is sort of a standard thing to look at. I don't disagree with their final conclusions, but I can't help but wonder if HL2 spanked nVidia, and that made them choose another game. It would be more unbiased for them to stick to a more or less standard set of tests, and it would let us compare to prior tests as well.

Larso - Monday, October 10, 2005 - link
First I would like to thank the AT crew for another excellent article! The graphs are very intuitive and easy to read.

I think one aspect is missing in nearly every X1000 article I have found on the net: how does this new family of cards compare with the previous generation(s)? For example, what mid-range card should I pick up as an upgrade for a 9600XT, and how much improvement will I get?
Has anybody stumbled across a review that compares these new cards with the previous generations?
-
Another question: I selected the 9600XT back then because it can easily be passively cooled. Now I wonder if any of the new cards can be silenced without turning the case into a bake oven?
MrJim - Sunday, October 9, 2005 - link
I don't understand why AnandTech keeps not testing games other than shooters with these high-end cards, like demanding flight sims (Lock On is very graphics intensive) and/or Pacific Fighters (not as GPU dependent as Lock On). Also, why no racing sims? We who play these do use filtering and FSAA a lot, or at least the 350 ones I know do. Cheers!

dimitrio - Saturday, October 8, 2005 - link
Someone said they like the way the graphs were done. I must say that I found it a little difficult to do comparisons, since you have to look back and forth to see what card each symbol represents. After you give up figuring that out, you try to look at the data below, but again, since it's not ordered "better to worse", it takes some time to figure out that data. Things got even more complicated with the "Lower is Better" graph.

I acknowledge that it makes everything much cleaner, and with the number of benchmarks in this article, you would end up with dozens of graphs across several pages, and you can clearly sense the writer's desire to improve the presentation with this new format. But sometimes things are better kept simple, and I would still like to see the many bar graphs, as they are much more intuitive and informative, to me at least.
photoguy99 - Saturday, October 8, 2005 - link
This article is a prime reason - editors listen, participate, improve.Tom's doesn't even link their articles directly to discussions! Why? Can't handle the feedback?
Glad to be here.
Spoelie - Saturday, October 8, 2005 - link
Isn't anyone else confuzzled about the X1600 XT's lackluster showing? I was really hoping to make it my next upgrade, but its current performance is only value-card worthy. Just looking at the specs (12 SM3.0 pixel pipes @ 600MHz), you would expect it to cream the 6600 GT (8 SM3.0 pixel pipes @ 500MHz), but it's barely even competing with it. This is without considering other architectural advances and faster memory! My guess is that the ultimate fillrate is determined by the TMUs, and only having 4 makes this card a worthy 9600XT follow-up (which had 4 TMUs @ 500MHz) but nowhere near the mainstream card of this day and age. Extremely bad decision on ATI's part if this is it. I can't think of any other reason for this card to perform so poorly. It would be nice to have it clarified if I'm completely missing the issue, though.

taylore2003 - Saturday, October 8, 2005 - link
What Anandtech really needs to do is benchmark the X1300 Pro on a non-FX-55 system. People who buy that GPU will not have a top-of-the-line PC, so do it on an AMD 3200+, not an FX-55. Then people like me (damn all 16 y/o's) can see what kind of framerates we would be getting. The X1300 Pro should go up against the 6600 non-GT! We can see the X1800 is a great top-of-the-line part, but ATI's mid- to low-range GPUs are not so hot. So let's see an X1300 Pro vs. a 6600 non-GT with a reasonable test setup.

coldpower27 - Saturday, October 8, 2005 - link
From what I can see, it seems the 4 TMUs are a crippling limitation when compared to the 6600 GT, as that has 8 TMUs but 4 ROPs. The X1600 XT beats the 6600 GT, yes, but not by as much as we would be expecting.

Compared to the 6600 GT:
Pure Pixel Fillrate: X1600 XT 7.08 GP/s vs 6600 GT 4.0 GP/s (77% more, ATI)
Output Pixel Fillrate: X1600 XT (4 TMUs) 2.36 GP/s vs 6600 GT (4 ROPs) 2.0 GP/s (18% more, ATI)
Vertex Shader Fillrate: X1600 XT 737.5 MT/s vs 6600 GT 375 MT/s (96.6% more, ATI)
Memory Bandwidth: X1600 XT 1380MHz, 22.08 GB/s vs 6600 GT 1000MHz, 16 GB/s (38% more, ATI)
Add to that the 256MB vs 128MB comparison, and "more efficient Shader Model 3.0 implementation".
Battlefield with AA.
X1600 XT is ~ 50-60% Faster
Day of Defeat Source with AA
X1600 XT is ~ 33% Faster (Starting 12x9)
Doom 3 with AA (OpenGL)
X1600 XT is ~ 6600 GT
FarCry with AA
X1600 XT is ~ 37%-38% Faster (10x7 to 12x9)
Chronicles of Riddick No AA (OpenGL)
X1600 XT is ~ 6600 GT
Splinter Cell Chaos Theory with AA&AF
X1600 XT is ~ 12-18% Faster.
Everquest No AA
X1600 XT is 12-18% Faster.
In some cases it's just faster by up to the difference in their output fillrates. Battlefield 2, FarCry, and DOD:Source are its biggest wins, and the two OpenGL games are its poorest showings.
Not much of this is a surprise however.
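The fillrate figures quoted above follow from simple units-times-clock arithmetic. A quick sketch reproduces them; the unit counts and clocks (590 MHz X1600 XT core, 500 MHz 6600 GT core) are the assumptions implied by the commenter's numbers, not official specifications:

```python
# Peak fillrate = functional units x core clock. Unit counts and clocks
# below are the assumptions implied by the figures in the comment above.
def fillrate_gpixels(units, clock_mhz):
    """Gigapixels (or gigatexels) per second for `units` pipes at `clock_mhz`."""
    return units * clock_mhz / 1000.0

# "Pure" pixel fillrate: 12 pixel shaders @ 590 MHz vs 8 pipelines @ 500 MHz
x1600_pixel = fillrate_gpixels(12, 590)  # 7.08 GP/s
gt6600_pixel = fillrate_gpixels(8, 500)  # 4.00 GP/s

# Output fillrate: both parts handle only 4 pixels per clock at the back end
# (4 TMUs on the X1600 XT, 4 ROPs on the 6600 GT).
x1600_out = fillrate_gpixels(4, 590)     # 2.36 GP/s
gt6600_out = fillrate_gpixels(4, 500)    # 2.00 GP/s

shader_gap = x1600_pixel / gt6600_pixel - 1  # ~0.77, the "77% more"
output_gap = x1600_out / gt6600_out - 1      # ~0.18, the "18% more"
```

Note how the 12-18% wins listed above line up with the 18% output-fillrate gap rather than the 77% shader-fillrate gap, which supports the TMU-bottleneck reading.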
AtaStrumf - Saturday, October 8, 2005 - link
I was wondering something along those lines too, especially why the X1800 XT is so much faster than the X1800 XL, and why the X1300 Pro is not that much slower than the X1600 XT.

I don't get this new ring bus memory controller. Maybe it has something to do with that as well as the TMUs and co. In the past we had 256-bit on the high end, 128-bit in the middle, and 64-bit on the low end; now it seems as though all have _the same_ memory controller, which seems a bit odd to me. What is also peculiar is the fact that all but the highest-end X1800 XT have 256 MB of memory, while the X1800 XT has 512 MB. Does more memory now somehow equal more "bits" -- bandwidth?
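For what it's worth, bandwidth comes from bus width and effective memory clock, not from how much memory a card carries. A small illustration, assuming 128-bit buses on the mid-range parts and a 256-bit bus with 1500 MHz effective memory on the X1800 XT (these are assumptions matching the figures quoted in the comments, not verified specs):

```python
# Peak memory bandwidth depends on bus width and effective memory clock,
# not on the amount of memory fitted. Bus widths and clocks below are
# assumptions matching the numbers quoted in the comments above.
def bandwidth_gbps(bus_bits, effective_clock_mhz):
    """GB/s = (bus width in bytes) x (effective transfer rate)."""
    return bus_bits / 8 * effective_clock_mhz / 1000.0

x1600xt = bandwidth_gbps(128, 1380)  # 22.08 GB/s, as quoted above
gt6600 = bandwidth_gbps(128, 1000)   # 16.0 GB/s
x1800xt = bandwidth_gbps(256, 1500)  # 48.0 GB/s on the 512MB card
```

Doubling the memory from 256 MB to 512 MB changes neither input, so more memory adds capacity, not "bits" of bandwidth; the X1800 XT's bandwidth edge comes from its wider bus and faster memory clock.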
haelduksf - Friday, October 7, 2005 - link
I'm guessing you're actually having a "peek" at DOD:S performance ;)