Overclocking Extravaganza: GTX 275's Complex Characteristics
by Derek Wilson on June 4, 2009 5:35 AM EST - Posted in GPUs
After our in-depth look at overclocking AMD's Radeon HD 4890, many of our readers wanted to see the same thing done with NVIDIA's GTX 275. We had planned on looking at both parts from the beginning, but we knew each review would take a bit of time and effort to design and put together. Our goal has been to design tests that best show the particular overclocking characteristics of each vendor's hardware, and shoehorning all of that into one review would have been difficult. Different approaches are needed to evaluate overclocking with AMD and NVIDIA hardware.
For our AMD tests, we only needed to worry about memory and core clock speed. This gave us some freedom to look at clock scaling in order to better understand the hardware. NVIDIA, on the other hand, divides its GPU up a bit more and has another, higher-speed clock domain for the shader hardware. Throwing another variable in there has a multiplicative impact on our testing, and we had a hard time deciding which tests really mattered. If we had simply used the same approach we did in the 4890 article, we would have ended up with far too much data to easily present or meaningfully analyze.
We've kept a few key test points: we will look at each clock at the highest speed we could achieve on its own (all other clocks at stock), and we will also look at performance with every clock set to the maximum we could hit. Beyond this, rather than looking at how performance scales over core speed with memory and shader at their maximums, and over shader speed with memory and core at their maximums, we decided it would be cleaner to look at just one more configuration. For this test, we chose core and shader speed at maximum with memory at stock. The full set of configurations is sketched below.
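For readers who like to see the test matrix spelled out, here is a small sketch of those five configurations (our own shorthand; the actual stock and maximum clock speeds are listed a little further down):

```python
# The five GTX 275 configurations compared in this article: each clock domain
# overclocked on its own, core + shader together, and everything at once.
# "stock" and "max" refer to the stock and maximum stable clocks listed below.
STOCK, MAX = "stock", "max"

configs = {
    "memory only":       {"core": STOCK, "shader": STOCK, "memory": MAX},
    "core only":         {"core": MAX,   "shader": STOCK, "memory": STOCK},
    "shader only":       {"core": STOCK, "shader": MAX,   "memory": STOCK},
    "core + shader":     {"core": MAX,   "shader": MAX,   "memory": STOCK},
    "fully overclocked": {"core": MAX,   "shader": MAX,   "memory": MAX},
}
```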
As with the previous look at overclocking, we present our analysis based on percent increases in performance but provide the raw data as well. It's all pretty straightforward with the raw data, and we do include our highly overclocked 4890 as well as the 900MHz core-clocked 4890 that can be picked up pre-overclocked from the manufacturer. For the bulk of the article we will just be considering the impact of overclocking on the GTX 275, but our conclusion will compare AMD and NVIDIA on overclocking in this segment.
The clock speeds we were able to pull out of our GTX 275 were not too shabby as far as overclocks go. Our core clock speed could have been better, but otherwise we did pretty well. Here is what we will be looking at today:
Core: 702MHz (vs. 633MHz stock)
Memory: 1296MHz (vs. 1134MHz stock)
Shader: 1656MHz (vs. 1404MHz stock)
These are 10.9, 14.3, and 17.9 percent increases respectively. First up, we'll look at the impact of overclocking the memory, then we'll move on to core and shader. After that it's on to fully overclocked and our core/shader combined overclock.
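For anyone who wants to double-check those figures, each gain is simply the ratio of overclocked to stock frequency; a quick sketch using the clock speeds listed above:

```python
# Percent increase of each overclocked GTX 275 clock domain over stock.
clocks = {
    "core":   (633, 702),     # (stock MHz, overclocked MHz)
    "memory": (1134, 1296),
    "shader": (1404, 1656),
}

for domain, (stock, oc) in clocks.items():
    gain = (oc / stock - 1) * 100
    print(f"{domain:>6}: {stock} -> {oc} MHz  (+{gain:.1f}%)")
# core: +10.9%, memory: +14.3%, shader: +17.9%
```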
43 Comments
Hrel - Friday, June 5, 2009 - link
Wow, I guess the guys who programmed WAW and Race Driver did a REALLY crappy job at resource allocation; 30 percent compared to about 8 percent from Left 4 Dead; pretty terrible programming.
MonsterSound - Friday, June 5, 2009 - link
I too like the 'change-in-place' resolution graphs, but have to agree that they would be better if the scale was consistent.
As far as the 702MHz OC on your 275, that seems like a weak attempt. The retail EVGA 275 FTW model, for example, has been binned as an overclocker and its stock speed is 713MHz. My MSI 275 FrozrOC is running at 735MHz right now. I can't seem to find mention of which model of the 275 you were testing with, but obviously not the fastest.
respectfully,...
Anonymous Freak - Thursday, June 4, 2009 - link
While I love the 'change-in-place' resolution graphs, they really need to be consistent. Leave games in the same location vertically, and keep the same scale horizontally. That way I can tell at an instant glance what the difference is. I don't like having the range switch from 0-15 to 0-7 to 0-10, plus changing the order of the games, when I click the different resolutions!
After all, the only difference that matters on the graphs is the one the individual bars represent. So why go changing the other aspects? Yes, it's "pretty" to have the longest bar the same length, and to always have the graph sorted longest-on-top; but it makes the graph less readable.
For the few graphs that have a bunch of values clustered near each other, plus one or two outliers, just have the outliers run off the edge. For example, in most of your one-variable graphs, a range of 0-10% would be sufficient. Just make sure that for a given resolution set, the range is the same.
yacoub - Thursday, June 4, 2009 - link
This article completely kicks butt! It includes everything I'd want to see in charts, including both % gains and the actual FPS numbers versus other cards, and all with the three most important resolutions.
Very, very good article. Please keep up this level of quality - the data and the depth really answer all the major questions readers and enthusiasts would have.
chizow - Thursday, June 4, 2009 - link
Nice job Derek, I've been lobbying for a comparison like this since G80, but it's nice to see a thorough comparison of the different clock domains and their impact on performance.
As I suggested in some of your multi-GPU round-up articles, it'd be nice to see something similar using CPU clockspeed scaling with a few different types of CPU, say a single i7, a C2Q 9650 and a PII 955 for example, then test with a fast single GPU and observe the performance difference at different clockspeeds.
It'd also be interesting to see some comparisons between different GPUs, say 260 to 275 to 280/285 at the same clockspeeds to measure the impact of actual physical differences between the GPU versions.
spunlex - Thursday, June 4, 2009 - link
It looks like a stock GTX 275 beats the 280 in almost every benchmark, even at stock speed. Does anyone have any explanation as to why this is happening??
I guess GTX 280 sales will be dropping quite a bit now.
PrinceGaz - Thursday, June 4, 2009 - link
This whole idea of the three separate overclocks (core, shader, memory) each being able to simultaneously provide almost their full percentage increase to any single result cannot possibly be right.
Imagine you take the situation where a card is overclocked by 10% throughout (instead of 11%, 14%, 18% like you did). Core up 10%. Shaders up 10%. Memory up 10%. Going from your numbers, that would probably have given you about a 20% performance increase in two of the games! Do you really expect us to believe a graphics card running 10% faster can give a 20% performance boost to the overall framerate?
How does magically making Core and Shader separate overclocks allow them to work together to nearly double their effect? If it worked that way, you could split the card up into twenty separate, individually overclockable parts, overclock them all by 10%, and end up with something giving over 3x the performance, all from a 10% overclock :p
Something else must be happening in addition to what you are doing, and my first priority would be to check the actual speeds the card is running at using a third-party utility which reports not the speed the clocks have been set to, but the actual speed the hardware is running at (I believe RivaTuner does that in real-time in its hardware-monitor charts).
DerekWilson - Thursday, June 4, 2009 - link
I used RivaTuner to check the clock speeds. I made very sure things were running at exactly the speeds I specified. At some clocks, the hardware would sort of "round" to the next available clock speed, but the clocks I chose all actually reflect what is going on in hardware.
I do see what you are saying, but it doesn't work either the way you think it should or the way that you claim my logic would lead it to be. Extrapolating the math I used (which I believe I made clear was not a useful judge of what to expect, but an extreme upper bound that is not achievable) is one thing, but that isn't what is actually "happening," and I don't believe I stated that it was.
Like I said, it is impossible for the hardware to achieve the full theoretical benefit from each of its overclocked subsystems, as this would imply that performance was fully limited by each subsystem at once, which is just not possible.
If I was confusing on that point then I do apologize.
Here's what I know, though: 1) the reported clock speeds are the clock speeds the hardware was actually running at and 2) the performance numbers are definitely correct.
I fully realize I didn't do a good job of explaining why the two above points are both true ... mostly because I have no idea why.
I tried to paint the picture that what actually happened was not impossible, while (I thought) making it clear that I don't actually know what causes the observed effect.
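To put some rough numbers on the disagreement above, here is a minimal bottleneck model (purely illustrative, and not the methodology the article used): assume a frame's render time splits into portions limited by the core, shader, and memory clocks, and that raising a clock only shrinks its own portion. Under that assumption the combined overclock can never beat the largest single-clock gain (about 18% here), which is exactly why gains approaching 30% in a couple of games look so surprising.

```python
# Illustrative serial-bottleneck model of GPU frame time (not the article's method).
# Assumption: frame time splits into portions limited by core, shader, and memory,
# and overclocking a domain shrinks only its own portion; the time shares are made up.

speedups = {"core": 702 / 633, "shader": 1656 / 1404, "memory": 1296 / 1134}
share = {"core": 0.4, "shader": 0.4, "memory": 0.2}  # hypothetical frame-time split

def overall_gain(oc_domains):
    """Overall % framerate gain when only the listed domains are overclocked."""
    new_time = sum(share[d] / (speedups[d] if d in oc_domains else 1.0) for d in share)
    return (1 / new_time - 1) * 100

for d in speedups:
    print(f"{d} only: +{overall_gain([d]):.1f}%")
print(f"all three: +{overall_gain(list(speedups)):.1f}%")
# With these shares the combined gain works out to about +14%, and in this model it
# can never exceed the largest single-clock gain (~17.9%), so real-world results
# near +30% mean the hardware is doing something this simple model does not capture.
```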
Kibbles - Thursday, June 4, 2009 - link
Great article. I especially liked the 3 linked graphs. One question though: I've been wondering how much power the latest graphics cards use when you underclock them to the lowest possible clocks while idling, or does the hardware do it automatically? For example, I have the 2D mode on my 8800 GTX set to only 200MHz core/shader/memory using NiBiTor. Or would it matter?
DerekWilson - Thursday, June 4, 2009 - link
All the current-gen cards do significantly underclock and undervolt themselves in 2D mode. They also turn off parts of the chip that aren't in use.
I believe you can set the clocks lower, but the big deal is the voltage, as power is proportional to frequency but to the square of voltage. I don't /think/ it would make that much difference in 2D mode, but then it's been years since I tried doing something like that.
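For reference, the relationship Derek is describing is the usual CMOS dynamic-power approximation (power roughly proportional to switched capacitance times voltage squared times frequency). A quick sketch with entirely made-up voltage and clock numbers shows why the voltage drop, rather than an extra clock drop, does most of the work:

```python
# CMOS dynamic power scales roughly as C * V^2 * f (switched capacitance,
# supply voltage squared, clock frequency). The voltages and clocks below are
# invented purely to illustrate the relationship, not measured GTX 275 values.

def dynamic_power(voltage, freq_mhz, cap=1.0):
    return cap * voltage**2 * freq_mhz

full_3d   = dynamic_power(voltage=1.10, freq_mhz=633)  # hypothetical 3D state
stock_2d  = dynamic_power(voltage=0.95, freq_mhz=300)  # hypothetical 2D state
forced_2d = dynamic_power(voltage=0.95, freq_mhz=200)  # user-forced lower 2D clock

print(f"2D as a fraction of 3D dynamic power:   {stock_2d / full_3d:.0%}")
print(f"extra cut from forcing 300 -> 200MHz:   {1 - forced_2d / stock_2d:.0%}")
# The squared voltage term means the ~14% voltage drop alone cuts dynamic power by
# roughly 25%; once voltage is down, pushing clocks lower only scales what is left
# linearly, and leakage/board power (not modeled here) does not change at all.
```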