Investigations into Athlon X2 Overclocking
by Jarred Walton on December 21, 2005 12:00 PM EST - Posted in CPUs
Application Performance
Moving on to more real-world applications, we'll start with Winstones 2004. Winstones runs a series of scripts in a variety of applications. The problem is that many of the scripts simulate user input and operate at speeds no human can approach. Rendering an image, encoding audio or video, and similar tasks take time; Word, Excel, and Outlook, on the other hand, are almost entirely user-limited. While the fastest systems do score higher, in everyday use, the typical office applications run so fast that differentiating between the various overclocked settings is difficult, if not impossible.
Normally (i.e., using the default settings), Winstones will defrag the hard drive, run through the script, reboot the system, and then start over. It does this five times, and the highest score is reported. Depending on your point of view, however, the reboot isn't really necessary. In fact, eliminating the reboot will generally produce higher scores on subsequent runs - a difference of as much as 15%. The Venice overclocking article allowed the reboots to occur, but this time, I took that step out. The result is slightly higher scores compared to the Venice article, helped in part by the dual cores, but also by the removal of the reboots. Results within each article are comparable to each other, but you shouldn't compare directly between the two articles. Honestly, Winstones isn't a terribly important measurement of performance anyway, but some people still worry about application performance.
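To make the scoring method concrete, here's a rough sketch of the best-of-five logic. The run_winstones stand-in and its numbers are made up for illustration - this is not the actual Winstones harness, just the shape of what it does:

```python
import random

def run_winstones(warm: bool) -> float:
    """Hypothetical stand-in for one scripted Winstones pass; a real
    harness would launch the benchmark and parse its score."""
    base = 30.0
    # Warm runs (no reboot in between) tend to score higher - we
    # observed differences of as much as 15% from skipping reboots.
    return base * (1.10 if warm else 1.0) + random.uniform(-0.5, 0.5)

def best_of_five(reboot_between_runs: bool = True) -> float:
    scores = []
    for i in range(5):
        warm = (i > 0) and not reboot_between_runs
        scores.append(run_winstones(warm))
        # With default settings, the system reboots here so that every
        # pass starts cold; skipping the reboot leaves caches warm.
    return max(scores)  # only the highest of the five runs is reported

print(best_of_five(reboot_between_runs=False))
```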
We get a decent performance increase from overclocking, but nowhere near the theoretical maximum. If you look at single-core vs. dual-core performance, it's also obvious that Winstones doesn't benefit much from the addition of a second core. That's understandable, as the tests are not run in a multitasking fashion; newer multimedia applications should show more of a difference. Going from 2.0 GHz to 2.7 GHz represents a 35% increase in clock speed. In the Business Winstones test, we see a range from 26.6 to 31.9, a 20% increase. The Content Creation test shows a slightly larger gain, ranging from 33.9 to 42.9 - 27% more performance. Basically, what we're seeing is that Winstones isn't completely CPU limited.
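For those who want to check the math, a few lines of Python (using the scores quoted above) show how far each test falls short of perfect clock scaling:

```python
def pct_gain(low: float, high: float) -> float:
    """Percent increase from low to high."""
    return (high / low - 1) * 100

clock = pct_gain(2.0, 2.7)          # 35.0% higher clock speed
business = pct_gain(26.6, 31.9)     # ~19.9% higher score
content = pct_gain(33.9, 42.9)      # ~26.5% higher score

# Scaling efficiency: the fraction of the clock increase that shows
# up as benchmark gain. Anything well under 1.0 means the test is
# not purely CPU limited.
print(business / clock)  # ~0.57
print(content / clock)   # ~0.76
```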
The different memory types also have very little impact on performance. Overall, the fastest RAM configuration ended up being the 2GB setup, but only by a small percentage - well within the margin of error for Winstones. The value memory is the slowest performer, as it should be given the lower price, but the difference isn't nearly as large as some might expect. If you're trying to decide what to pair with a relatively expensive dual-core processor, the results here aren't very conclusive. Still, if you need multitasking performance, more memory is a reasonable purchase.
Similar in some ways to Winstones, PCMark attempts to gauge overall system performance. The results are a little more theoretical, but the simulated applications are, in many instances, more recent and benefit more from dual cores. PCMark also includes some 2D and 3D graphics tests, which make the GPU somewhat important to the overall score. If you compare results to those of the Venice article, you'll need to take the impact of the 7800 GTX into consideration.
The difference between the slowest and fastest scores for our configuration is about the same as in Winstones. PCMark04 goes from 5852 to 6999, a 20% increase. Unfortunately, PCMark04 was one application that consistently crashed at 2.4 GHz and above. Actually, crashed isn't the correct term; the grammar check portion of the third multitasking test repeatedly failed. However, this is the only test that failed consistently at those speeds, so it's something of an anomaly. Everything else appeared to run without issue, and we could get results for test 3 on its own, but we couldn't get the entire suite to complete. PCMark05 shows a similar spread, ranging from 5089 to 6101 (20%). PCMark05 also required the installation of the AMD CPU driver in order to produce acceptable scores. Without the driver installed, all HDD tests scored close to 0, severely impacting the results.
Both of the PCMark tests serve as good stress tests of CPU overclocks, which is one of the reasons why we include the results. The issues with PCMark04 are difficult to explain, since in other testing, we have found PCMark05 to be the more strenuous of the two. We did run all of the other tests in both PCMark products (scores not shown), and all of them passed. If we had encountered additional errors in either one, we would be more hesitant to call the 04 results into question, but for now, we're relatively confident that the 2.6 GHz overclock is stable.
In case the graphs don't convey this fact well enough, our standard application scores benefited very little from the use of higher quality RAM. The addition of a second core also didn't help much in many instances, which is generally true of real-world application performance. Other tasks will definitely benefit, and the overall user experience feels smoother and faster with a dual-core chip, but if you mostly just surf the web, you'll be wasting money on such a fast system.
46 Comments
JarredWalton - Wednesday, December 21, 2005
Ugh - I had a comment in this article that drew special attention to the fact that the graphs aren't zeroed. I think in the process of tweaking the article to get things to look right, I accidentally deleted that paragraph. I have now added the paragraph back in (http://www.anandtech.com/cpuchipsets/showdoc.aspx?...). If I start all the graphs at zero, everything overlaps and you can't really see what's going on. In some cases, everything is still overlapped a lot (FEAR). I normally hate non-zeroed graphs, but when the results are all so close together, that's no good for readability either.
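To illustrate the tradeoff with made-up numbers (not the article's data): a zeroed axis keeps the baseline honest but hides small differences, while a truncated axis makes them visible but exaggerates them. A minimal matplotlib sketch:

```python
import matplotlib.pyplot as plt

# Hypothetical scores clustered closely together, like the FEAR results.
settings = ["2.0 GHz", "2.4 GHz", "2.6 GHz", "2.7 GHz"]
scores = [61, 63, 64, 65]

fig, (zeroed, truncated) = plt.subplots(1, 2, figsize=(8, 3))

zeroed.bar(settings, scores)
zeroed.set_ylim(0, 70)       # zeroed baseline: honest, but bars nearly overlap
zeroed.set_title("Zeroed axis")

truncated.bar(settings, scores)
truncated.set_ylim(60, 66)   # truncated baseline: differences visible, but exaggerated
truncated.set_title("Truncated axis")

plt.tight_layout()
plt.show()
```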
BigLan - Wednesday, December 21, 2005
Well, if everything is overlapping that much, it's likely that the results are too close to be really meaningful. The FEAR graph is a pretty good example of How to Lie With Statistics (http://www.amazon.com/gp/product/0393310728/) ;)

BigLan - Wednesday, December 21, 2005
^ Still need my edit function for comments. :p Dammit
JustAnAverageGuy - Wednesday, December 21, 2005
:)

JarredWalton - Wednesday, December 21, 2005
^ Still need my edit function for comments. :p

That was supposed to say, "I had a comment in this article that drew special...." That will teach me to trust my speech recognition software.
Hacp - Wednesday, December 21, 2005
I have found that past 2.6 GHz, the heat and temps increase dramatically. Nice to know that AnandTech got the same results as well.

Yianaki - Wednesday, December 21, 2005
SuperPI crashes - help! I have an Opteron 165 with 1GB of value PC4000 Kingmax RAM at 133, 2.5-4-4-8 2T. The board is overclocked to 1.4V at 9x277 = 2.494 GHz. I have run two torture versions of Prime95 (one CPU intensive, one RAM intensive) on each processor, for a total of four instances of Prime95. At the same time, I run 3DMark 2005 plus Winamp with visualizations on. I leave this running all night (9+ hours) in a loop. No errors at all, no bugginess at all. I game for 3 or 4 hours at a time with no problems.

But I just read this article, where SuperPI was mentioned, and since I'd never used it before, I tried it. It wouldn't work unless I lowered my overclock to 2.0, which is unacceptable to my sorry ass. So I KNOW my system is unstable; I was just wondering if it matters, as the computer is completely stable in use. I am guessing that Prime95 just rounds off answers and they don't have to be exact, whereas SuperPI's answers are already known to be exact. Actually, SuperPI runs fine on its own, but if I open up a second copy from a second folder and attach the affinity to both processors, SuperPI errors within 1 second of starting the second process. Any ideas??
Could it possibly be my motherboard or RAM, as both are 'value' versions, not OC specialty items? I have already played around with rendering DivX movies and playing Doom with no problem. Will I probably have some problem down the road, like a small encoding error or DVD writing error due to my overclock? I overclocked my old PII too high and it was spewing out bad math - I did all these chem reports in college and was getting completely off-the-wall numbers (I never tested my PII OC in Prime95). Is this the same, or does the error correcting in the programs that I run in XP make this point moot?
Man, I wish I had never read about this SuperPI poo :<
Furen - Wednesday, December 21, 2005
SuperPI and Prime95 work differently. I think SuperPI is more reliant on memory bandwidth (if you're doing something like the 32M SuperPI run), so your RAM may be the problem. (I personally like Crucial RAM - I've never had any problems with it at all, and it's not much more expensive than the generic crap.) If your system doesn't crash when you're doing whatever it is that you do on your computer, then you're fine, but I'd still try to work on the RAM to see if you can get it SuperPI stable.

Yianaki - Wednesday, December 21, 2005
It just gets weirder and weirder... If I don't set the affinity manually in Task Manager, it will run through to the end and the CPUs will both be at 100%. But if I set them manually in Task Manager before I start 32M, the second one I start always crashes. I am guessing that Task Manager isn't running the processors at 100% or something - Windows is automatically distributing the load across the two CPUs, and for a millisecond one isn't doing anything???? My memory is up to 95% utilization... This proggie sucks if you ask me.

Yianaki - Wednesday, December 21, 2005
Thanks for the feedback. BTW, it is Kingmax Super RAM, dual channel, mate. My motherboard is an ASRock 939Dual-SATA2 - bought it because it runs both AGP and PCIe quickly. I have my RAM underclocked: normally it can run at 200, but it is running at 133, and I also tried lowering it to 100. I lowered all the timings below what the SPD says. None of these things helps... I am really confused. I run the Blend (RAM-intensive) test on each processor at the same time as the FFT test in Prime95. Memory usage goes up to 1.5GB total (I only have a gig), so that is using all the memory plus the page file. But there is no error at all. I am a little dumbfounded, but I have been thinking about it, and my computer doesn't have any 'random' errors, which is fine. 'Cept for Firefox 1.5, and it had the same occasional problems before my upgrade. Oh well, hope everything stays stable. Thanks for the feedback.
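For anyone trying to reproduce this sort of two-instance stress test, the manual affinity juggling described above can be scripted. Here's a minimal sketch using Python's psutil package - a modern convenience, not something from the thread, and the superpi.exe path is a hypothetical placeholder:

```python
import subprocess
import psutil  # third-party: pip install psutil

# Hypothetical path to the benchmark executable; adjust for your setup.
SUPERPI = r"C:\bench\superpi\superpi.exe"

def launch_pinned(core: int) -> subprocess.Popen:
    """Start one instance and pin it to a single core, mimicking the
    manual 'Set Affinity' step in Task Manager."""
    proc = subprocess.Popen([SUPERPI])
    psutil.Process(proc.pid).cpu_affinity([core])
    return proc

# One instance per core of a dual-core CPU (e.g., an Opteron 165).
workers = [launch_pinned(core) for core in (0, 1)]
for w in workers:
    w.wait()  # block until both runs finish (or crash)
```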