163 Comments

  • SonicIce - Monday, November 14, 2011 - link

    Cool, good review.
  • wharris1 - Monday, November 14, 2011 - link

    It would be interesting to test an OC'd SB-E vs an OC'd SB; I suspect that the 2x advantage of the SB-E would fall back in line to around the ~30-40% speed advantage seen in non-OC'd testing (in heavily threaded workloads). I have the feeling that between being defective Xeon CPU parts and lacking more SATA 6Gbps as well as USB 3.0 functionality on the motherboard side, this release is a bit hamstrung. I bet that with the release of Ivy Bridge E parts/motherboards, this combo will be more impressive. Part of the problem is that the regular SB parts are so compelling from a price/performance perspective. As always, nice review.
  • Johnmcl7 - Monday, November 14, 2011 - link

    I thought that odd as well, as it almost implies the regular Sandy Bridge processors are poor overclockers when there are results for the new processor overclocked and Bulldozer overclocked. I guess, though, it's more that it would be interesting to see rather than something that would actually change anything. I currently have an i7 960 and was hoping for an affordable six-core processor, but it's looking like I'll wait until Ivy Bridge now.
  • Tunnah - Monday, November 14, 2011 - link

    Although I can understand the expectation of all 6 ports being SATA 6Gbps, maybe the reasoning is that implementing it would be pointless for 99.9% of users - I can't even begin to imagine any non-enterprise usage for 6 SSDs running at max speed!
  • Exodite - Monday, November 14, 2011 - link

    While I personally don't disagree that most people don't need more than two SATA 6Gbps ports, you have to keep in mind that 99.9% of all users have no need for the SB-E /platform/ in its entirety.

    Since it's squarely aimed at workstation power users and extreme-end enthusiasts, those last 0.1% of users if you will, offering more SATA 6.0Gbps ports makes sense.
  • Zoomer - Monday, November 14, 2011 - link

    I can't imagine the area difference being an issue. Like, are SATA 6Gbps controllers really that different once the design is already done and validated? Having two types of SATA controllers on chip seems redundant to me. It's like PCIe 1.0 vs 2.0; once you have the 2.0 implementation done, there's no reason to have 1.0-only lanes since it is backwards compatible.
  • Jaybus - Tuesday, November 15, 2011 - link

    The reason for keeping SATA 3Gbps and PCIe 1.0 is not a die area issue or lack of reasoning. SATA 6Gbps takes considerably more power than 3Gbps, and PCIe 2.0 likewise consumes more power than 1.0. It's simply the physical reality of higher transfer rates. SB-E is already at 130 W, so there simply isn't room in the power envelope to make every interface the highest speed available.
  • MossySF - Tuesday, November 15, 2011 - link

    We ran into this problem. Our data processing database has 1 slow SSD for a boot drive and 5 x SandForce SATA3 SSDs in a RAID0 array ... and we can't get even half the speed the SSDs can run at.

    You might ask why a non-enterprise user would be using this many SSDs. Uh, why would a non-enterprise user be running this obscenely fast computer? You need this much speed to play Facebook FarmVille?
  • ltcommanderdata - Monday, November 14, 2011 - link

    Given Ivy Bridge is coming in a few months, perhaps you could comment on whether SB-E is worth it even for power users at this time? Have there been any indications that high-end Ivy Bridge will likewise launch much later than the mainstream parts? Is LGA 2011 going to be around a while, or will it need to be replaced if high-end Ivy Bridge decides to integrate an IGP for QuickSync support and as an OpenCL co-processor?
  • DanNeely - Monday, November 14, 2011 - link

    I don't think Intel's spoken publicly about IB-E yet.

    That said, Intel hasn't done socket changes for any of the other recent die shrinks, so I doubt we'll see one for Ivy. Incremental gains in clock speed, and possibly pushing more cores down to lower price points ($300 6-core, or $1000 8-core), are the most likely results.

    OTOH, if its launch is as delayed as SB-E's was, Haswell will be right around the corner, and there will again be the risk of the new quad-core wiping the floor with the old hex-core for most workloads.
  • MOBAJOBG - Monday, November 14, 2011 - link

    Thanks for performing the analysis and sharing this information with us.
  • tenks - Monday, November 14, 2011 - link

    Am I the only one disappointed with this review?

    Anand, you usually give some amazing insight beyond the simple numbers. In this case, talking about what's coming up, whether this is worth it compared to upcoming stuff in the pipe, the rumors about a respin with fixed features, or an allusion to Ivy... etc., etc.?

    You've done it in the past. Are you just not allowed to touch on any of this per Intel?

    It was a very vanilla review, and I don't mean any disrespect, but that's not what I come here for.
  • Anand Lal Shimpi - Monday, November 14, 2011 - link

    I appreciate the criticism. There are a bunch of things I wanted to do that I simply ran out of time for. I added another couple of paragraphs at the end of the conclusion, hopefully directly addressing your concern.

    To answer you here though: we may see a new chipset offering next year, but IVB-E will probably be at least a year out from now (perhaps even longer). If you need the core count, you will probably be fine on SNB-E while the rest of the world moves to quad-core IVB in the middle of next year.

    Take care,
    Anand
  • tenks - Monday, November 14, 2011 - link

    Thanks for the response and updating the review.. really cool of you. I didn't mean to come off unappreciative, because I am appreciative. Just a long-time reader who loves the extra insight, reading between the lines and the dot connecting that you do so well... But the more I think about it, maybe it's just the platform itself that I'm disappointed with? Maybe there is no insight really to be had.. There is no real "juice" or cool new info we didn't already know about. I guess with all the silence on the platform, even at IDF, I was hoping for that "And one more thing.." feature in SB-E that we didn't know about..

    Also, forgive me but I have to try.. I know you know something.. When will the new stepping and 8-core DESKTOP (EE) SKUs hit?
  • THizzle7XU - Monday, November 14, 2011 - link

    Will there be a video review for this given the time? Your video reviews have been awesome and I really enjoy the conversation type of setting you present. Don't let anyone complain that they are too long :)

    Also, I was looking forward to this release, but in the last week I've decided to hold off for a while and see what happens with Ivy Bridge. My alternative upgrade ended up being a 512 GB Crucial M4 for my SATA 3 Sandy Bridge laptop, and it basically had a trickle-down effect: moving my Intel 320 SSD to my Core 2 Quad desktop, the desktop Intel G2 SSD to the PS3, etc. I felt that was a better way to spend $700-$800 at this point, for an upgrade that benefited all my devices instead of just my desktop. With the 22 nm process, is there any chance that mainstream Ivy Bridge will see a 6-core chip? I thought I read some speculation on that...
  • yankeeDDL - Monday, November 14, 2011 - link

    Call me cynical, but reading this review I couldn't help thinking about AMD's Bulldozer fail.
    Would we have the 3960X priced at $999 if the FX-8150 had been able to deliver decent performance (meaning an 8-core chip beating the i7 2600 by "a little", at $250)?

    And the X79 looks just sloppy.

    I'm afraid we're starting to see the effect of poor execution by AMD ...
  • velis - Monday, November 14, 2011 - link

    Yep, agree 100%.
    The chipset released is - as Anand said - a rebrand of an existing one. There is absolutely no reason at all not to include SATA 6Gbps and USB 3 across the board, except if the entire development budget was cut.

    And the CPU is actually a step back from existing Sandy Bridge offerings. No, that was an understatement - it's just a binned existing offering.

    Next I'm expecting Ivy to be delayed into late next year at best, unless AMD gets its act together.
  • JlHADJOE - Monday, November 14, 2011 - link

    Yes of course it will still be priced at $999.

    Maybe the 2500K and 2600K would have dropped in price a bit if Bulldozer had been more competitive, but Intel's Extreme Edition chips have always been pegged at $999.

    Lest you forget, it was actually AMD's heyday that drove CPU prices up to the insane levels we are seeing today. Prior to the Athlon's dominance, Intel's highest-end chip during the Pentium II days, the PII-450, cost around $600.

    AMD went on to dominate the chip space after they stuck with their excellent Athlon line, and Intel floundered with the Presshot. Intel was being dominated badly but managed to compete on price. It was AMD who first announced the $1100+ Athlon FX, forcing Intel to re-socket the Gallatin Xeon and sell it as the Pentium 4 Extreme Edition, just undercutting the Athlon FX's price by selling it at $999.

    If you look at Intel's track record while they have the performance lead, they have actually been very reasonable with pricing. Recall the $200 Celeron 300A, for instance, which was pretty good at stock, and would overclock into a PII-450 destroyer. Just recently they introduced the brilliant Sandy Bridge, again at about $200-$250, despite the fact that the 2600k destroys their $999 980/990x in gaming.

    It was when AMD had the performance lead that the $1000 CPU segment was established, one that has, for better or worse, persisted to this day (despite Intel currently being the sole occupant of that segment).
  • yankeeDDL - Monday, November 14, 2011 - link

    Respectfully, I disagree.
    First of all, let me be clear: I am not rooting for AMD dominance; I am rooting for "competition" dominance. AMD jacked prices up when the Athlon was making the P4 look ridiculous, and would do so again - I'm sure - if it had the chance.
    But it should be recognized that Intel is doing it now. It has "the power" to do so, but that doesn't make it any better for consumers.
    Who "started" the $1000 segment, frankly, is irrelevant: clearly Intel is enjoying it now, so let's focus on that, shall we?

    And yes, Intel's top of the line has been $999 for a while; however, it is pretty clear that the 3960X is only marginally better than the much more reasonably priced 3930K. I don't recall such a huge drop in performance/price ratio ever before (I don't have data, but it strikes me as a particularly bad ROI for the 3960X).
    This said, there's no excuse for the X79: re-branding is despicable, no matter who does it.

    Also, I think that if the FX-8150 had been half the CPU it was supposed to be, instead of the half-ass that it is, Intel would/should have come out with a bigger improvement over the existing offerings than the 3960X is.
    They have already delayed SB-E by a bit, clearly indicating that not everything worked as they planned. If they had had to provide an answer to a "compelling" AMD solution, I am sure they would have cranked up SB-E to be a more evident step forward over SB. But given Bulldozer's lack of performance, why bother? They could come out with SB-E as-is and not worry about the performance crown, the ROI, or cutting features.
  • just4U - Monday, November 14, 2011 - link

    In my opinion, looking at these results, AMD's FX-8150 isn't so much of a fail after all. Sure, it doesn't compare to this beast, but they both seem to shine in multi-threaded apps and don't seem to be geared toward desktop users. I was expecting to be blown away by the numbers here.... I am not.
  • yankeeDDL - Monday, November 14, 2011 - link

    That's what I was doing: blaming AMD. Intel is doing what any company that is interested in making money/profit would do.
  • yankeeDDL - Monday, November 14, 2011 - link

    Sorry ... I replied to the wrong post :) I meant to hit the one below!
  • JlHADJOE - Tuesday, November 15, 2011 - link

    If you compare it to the extreme edition chip, then Bulldozer looks like good value. But then the 3960X is a halo model for those people who care nothing about price.

    Considering the 3930K gives you 95% the performance of the 3960X for 50% of the price (see xbitlabs), there's really no reason to get the X-edition chip unless you are building a system purely for bragging rights.

    Now when compared to the 3930K, the FX8150 doesn't look nearly as good. If we consider total platform costs with either system having a $300 motherboard and $200 in ram, then we're looking at something in the region of $750 for BD, vs $1000 for the SB-E. +$250 is small change for double the performance at a similar power envelope.
  • yankeeDDL - Tuesday, November 15, 2011 - link

    JlHADJOE, yes, that was my point.
    The 3960X is - arguably - the fastest CPU available, but it is faster by a tiny margin while being radically more expensive than anything else.
    So yes, nothing looks as bad in terms of price/performance ratio, not even the FX-8150. And that's, basically, bad for everyone (except Intel).
  • actionjksn - Monday, November 14, 2011 - link

    I agree too. AMD's poor Bulldozer performance is having a huge effect on what we can get from Intel and at what price. And I blame AMD, not Intel, because Intel or any other company is supposed to do what's best for itself. Heck, if Intel did what we want, they would probably cause serious harm to AMD, because it would make AMD even less competitive. And I don't think Intel really wants to put AMD out of business.
  • yankeeDDL - Monday, November 14, 2011 - link

    That's what I was doing: blaming AMD. Intel is doing what any company that is interested in making money/profit would do.
  • GeorgeH - Monday, November 14, 2011 - link

    One of the bigger advantages of this platform to me is the 8 DIMM slots. However, it was rumored that the first revision of SB-E had/has VT-d problems, which spoils things a little bit, as VMs are one of the bigger reasons for lots of RAM. Can you confirm or deny whether there are VT-d issues?
  • Anand Lal Shimpi - Monday, November 14, 2011 - link

    VT-d is supported, checking to see if there are any functional issues now.

    Take care,
    Anand
  • GeorgeH - Monday, November 14, 2011 - link

    Ars Technica is reporting that VT-d is broken, but they don't cite any sources. A short article explaining what VT-d is, for those who don't know, and what (if anything) is broken might be in order.
  • Filiprino - Monday, November 14, 2011 - link

    That thing is really big!
    As for Quick Sync, it's not really useful. If you want quality you'll have to use x264, and at lower quality settings x264 has presets that are nearly as fast as Quick Sync.

    The winning combo is LGA 2011 + Kepler/Southern Islands.
    If you have a hole in your pocket you can throw in a dual socket motherboard, some liquid cooling and a big SSD.
  • Phylyp - Monday, November 14, 2011 - link

    Good review, thanks. I'm researching a new gaming PC, so this review is timely. Right now, seeing the comparative performance of the 2600K vs 3960X makes me want to wait for Ivy Bridge's 2600K replacement to see what sort of VFM that offers, compared to the 3930K.
  • DaFox - Monday, November 14, 2011 - link

    > Here we see a 40% increase in performance over the 2600K and FX-1850.
    On Page 5.
  • StealthGhost - Monday, November 14, 2011 - link

    I'm guessing by these results 2600k / 2500k is going to be a much better buy for gaming vs the 3930k

    The 2600K setup (mobo/CPU) I have is, going by the prices in the motherboard and CPU reviews, $485 cheaper than a 3930K + LGA 2011 mobo setup ($400 vs $885). That's more than double what I paid, and while the review for that one isn't out yet, even the 3960X isn't worth double just for gaming (obviously not what it's made for, but people will buy it for gaming anyway).

    I'd like to see i7 930 vs the 3930k in the review if at all possible since that is the replacement, no? Obviously 2600k as well.

    Any idea when that one will be up?
  • yankeeDDL - Monday, November 14, 2011 - link

    Tom's Hardware had the exact same conclusion.
    The 3960X is a workhorse and, arguably, the fastest desktop CPU available today; however, at $999 its value is just not there.
    For a shade more than half its price you get something only marginally slower, and only in certain scenarios. Gamers, for example, get very little benefit from the extra $350 over the 3930K.
  • StealthGhost - Monday, November 14, 2011 - link

    Yeah, according to their review, in BF3 the $999 processor would give me 0 gains since I have one card (GTX 570). If I had 2, which I might later this month, it would give me 3.5 fps more, but then I wouldn't be able to afford the 2nd card in the first place, haha.

    Core scaling and cache usage aren't there yet for a lot of games, I guess.
  • B3an - Monday, November 14, 2011 - link

    It's pathetic that the new game engine used for BF3 doesn't even make use of more than 4 cores, or the extra cache. And this engine is meant to be used for future games... not impressed.
  • Makaveli - Monday, November 14, 2011 - link

    So why don't you design a better engine?
  • Anand Lal Shimpi - Monday, November 14, 2011 - link

    As soon as we can get our hands on a 3930K sample :)
  • iwod - Monday, November 14, 2011 - link

    QuickSync is really for casual users only. It doesn't offer any advantage over x264 apart from the saved CPU time. x264 is faster than QuickSync in Ultrafast mode, with better quality, and its other modes deliver much better quality than QuickSync can ever get.

    So QuickSync is good if I want to transfer my media files to my portable, where quality doesn't really matter since I have the original file backed up. It is used for convenience.

    Anyone getting an SB-E and doing encoding would probably be better off with x264 than QuickSync.

    The next version of QuickSync is said to have vastly improved quality and speed.
  • Manabu - Tuesday, November 15, 2011 - link

    Intel's QuickSync quality is somewhere around x264 superfast/veryfast for the same bitrate. Ultrafast isn't the best tradeoff of speed and quality, as it gives up everything for speed.

    But I agree, someone with a Sandy Bridge E would be better off using x264 if he learns how to.

    A good comparison of speed and quality between GPU and CPU-only encoders:
    http://www.behardware.com/articles/828-1/h-264-enc...

    The only thing they missed is that, if you only care about quality and not a specific filesize/bitrate, you should be using CRF, not 2-pass, and much less a single pass with --bitrate.
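
    To make that concrete, here is a minimal sketch of the two approaches (assuming an x264 binary with lavf input support on a Unix-like system; file names are placeholders):

        import subprocess

        SRC = "input.mkv"  # placeholder source file

        # Quality-targeted encode: CRF lets the encoder spend bits where
        # needed instead of chasing a fixed average bitrate.
        subprocess.run(["x264", "--crf", "18", "--preset", "slow",
                        "-o", "crf.mkv", SRC], check=True)

        # Size-targeted encode: 2-pass hits a specific average bitrate
        # (kbps); only worth the doubled encode time if file size matters.
        for p in ("1", "2"):
            out = "/dev/null" if p == "1" else "twopass.mkv"
            subprocess.run(["x264", "--pass", p, "--bitrate", "2500",
                            "--preset", "slow", "-o", out, SRC], check=True)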
  • xpclient - Monday, November 14, 2011 - link

    I read Intel is not going to release AHCI/SATA/Matrix RAID drivers for 32-bit Windows XP for X79. Why??? Without AHCI, performance is not going to be optimal. For those who dual-boot with Windows 7, this means changing the setting from AHCI to IDE every time you boot into XP.
  • Rick83 - Monday, November 14, 2011 - link

    And I heard that USB 3 had no drivers for my Win95 D either!
  • B3an - Monday, November 14, 2011 - link

    Because no one in their right mind would buy this high-end platform and then use a decade-old POS operating system. It's a waste of Intel's time to support it, and XP needs to die already.

    XP won't take proper advantage of SSDs or even new HDDs, some drives won't even work on it at all, and it lacks the level of support and performance for CPU features and multiple cores that Win7 has. If you really need to run some old software, just use XP Mode in Win7 or use VM software.
  • xpclient - Monday, November 14, 2011 - link

    Are you not aware of multi-core benchmarks? http://www.infoworld.com/t/platforms/generation-ga... Windows 7 does not perform faster than XP until you reach eight cores or higher.
  • Peskarik - Monday, November 14, 2011 - link

    "Just as with previous architectures, installing fewer DIMMs is possible, it simply reduces the peak available memory bandwidth."

    So, I have:
    Asus P8Z68-V Pro/Gen3 motherboard
    Intel Core i7 2600K
    Corsair Vengeance Red, 2x4GB, DDR3-1600, CL9@1.5V

    When I am in the BIOS the memory is set to 1333MHz, and I had to manually set it to 1600, though I am not sure whether it actually runs at this speed (how can I check?).
    Does the above sentence from the article mean that with 8GB RAM I do not have full memory bandwidth, and that if I install 16GB RAM (I have 4 slots) then I get the full 1600MHz automatically?
  • kr1s69 - Monday, November 14, 2011 - link

    Sandy Bridge E is quad-channel and so needs 4 DIMMs to obtain the peak bandwidth. Your Sandy Bridge system is dual-channel, so you need 2 DIMMs to obtain the peak bandwidth. This is what you currently have installed, so no need to worry.

    The statement you quoted is basically saying the motherboard would boot with fewer than 4 DIMMs installed, that is all.

    Sandy Bridge defaults to 1333MHz, but you can change this in the BIOS to 1600MHz as you have done. You can download CPU-Z to confirm what speed your RAM is set to.
  • piroroadkill - Monday, November 14, 2011 - link

    Set the O/C Profile to X.M.P.

    This uses the eXtreme Memory Profile provided by your RAM. Basically, standard SPD ratings don't go as high as 1600. XMP is required for this; it's a custom Intel extension to SPD.
  • piroroadkill - Monday, November 14, 2011 - link

    Although it's supposed to work, in my experience I had to set my RAM to DDR3-1600 anyway. Ho hum.
  • lukarak - Monday, November 14, 2011 - link

    This is not a great update. I wonder if LGA 2011 will be the socket for Ivy Bridge E. If that's the case, it could be a good buy for somebody transitioning from an older system.
  • dlitem - Monday, November 14, 2011 - link

    There were some rumors/news circulating earlier that VT-d is bugged. Is that actually the case? SB-E is THE workstation platform, and losing VT-d is kind of a shame, as there actually are people who might have benefited from it.

    Also, a lot of us are still running our trusty 3-year-old quad Bloomfields that have served us so well, so including one LGA 1366 quad-core would have been a really nice thing.
  • jabber - Monday, November 14, 2011 - link

    ...with Pixar updating their rendering farm?

    I can't think of many other big customers for this kind of chip.
  • randinspace - Monday, November 14, 2011 - link

    Wouldn't they be using Xeons?
  • gevorg - Monday, November 14, 2011 - link

    Could the wasted space for the 2 fused cores and their L3 cache have been used for HD 2000 graphics? I wish Intel had avoided wasting die space like this.
  • GL1zdA - Monday, November 14, 2011 - link

    It's not wasting, it's binning. They could either throw away 8-core dies with damaged cores or sell them as six-core parts, which is what they did.
  • BSMonitor - Monday, November 14, 2011 - link

    Actually it's not binning in this case (some chips from the Xeon line might be). These "desktop" CPUs are actually the 8-core Xeon line trimmed down in both cost and validation for use in desktop PCs. Intel's current roadmap is 6-core desktop CPUs at the high end with extremely high memory bandwidth.

    It is cheaper for them to fuse off two cores from an 8-core Xeon production line than to design another CPU die just for the high-end 6-core desktop line. This class is by no means high-volume, hence yet another CPU die would be expensive.
  • GL1zdA - Monday, November 14, 2011 - link

    Could you test how Sandy Bridge-E behaves in vt_benchmark when GPU transcoding is used? I'm curious whether SB-E will do better than an NVIDIA 580, and what the difference is between 2600K+580 and 3960X+580 when GPU transcoding is enabled.
  • Kevin G - Monday, November 14, 2011 - link

    Intel crippled both the CPU and the chipset with this launch. I was hoping to see an 8-core model at the high end. The chip design itself is an 8-core die, so why not a fully functional chip for the low-volume Extreme Edition? The performance benefits of the Core i7 3960X over the 990X mirror those of the 2600K over the 875K. (Well, actually the 2600K vs. 875K gap is much wider due to the clock speed differences, not just the architectural changes.) Sure, it is faster at stock, but it's generally not worth upgrading to, especially factoring in motherboard cost. Another letdown is that the chip doesn't officially support PCI-E 3.0. True, there are no PCI-E 3.0 cards on the market today, but there will be tomorrow. Not sure if this is additional crippling to distinguish the consumer chips from the coming LGA 2011 Xeons, or if there actually was a problem running at PCI-E 3.0 speeds.

    Speaking of Xeons, this article didn't mention if the system has the two QPI links disabled. If not, there could be the remote chance of a manufacturer releasing a board with the X79 using DMI and an X58 chipset hanging off of a QPI link. That would allow for another two full bandwidth PCI-E 16X slots at 2.0 speeds without the usage of a bridge chip.

    Then there is the X79 chipset. The reality is that it offers very little over the Z68; no USB 3.0 or additional SATA ports are the big things. Knowing Intel, we'll likely see a Z79 chipset that'll enable the SAS functionality for those who want more storage. Hopefully the hypothetical Z79 chipset will also use some of the PCI-E lanes from the CPU for additional bandwidth, as an array of SSDs would easily be able to saturate the current DMI link.

    I'm also curious whether these X79 consumer boards will allow for some overclocking with an LGA 2011 Xeon. I'm not expecting full multiplier control, but feeding that 125MHz or 166MHz base clock to the CPU would suffice. Getting one of these consumer boards and paying the Xeon premium may wind up being the way to go for a true leap over the Core i7 990X.
  • khanov - Monday, November 14, 2011 - link

    "Could the wasted space for the 2 fused cores and their L3 cache have been used for HD 2000 graphics? I wish Intel had avoided wasting die space like this."

    This is a good question; I guess many would wonder why this is the case. Understanding why requires a little insight into the manufacturing of silicon chips:

    As with almost any manufacturing process there are variables that differentiate one product coming off the same assembly line from the next. So for example at a car factory each 'identical' engine is in fact a little different from another, whether it be the balancing of the crankshaft or the exact fit of the bearings.

    With the manufacturing of CPUs (and indeed any silicon chips) there are also small differences between the chips that come off the same assembly line. If a chip has a defect, for example (which happens all too frequently), the defective area of the chip needs to be disabled. In essence, this is why we are seeing Sandy Bridge-E CPUs launching with disabled cores.

    The fully enabled dies (eight cores and 20MB of L3 cache) are being sold (or will soon be sold) as Xeon chips for the highest price. Somewhat lesser dies with defects are being sold as lower-end Xeons with six cores or as consumer Sandy Bridge-E chips with six cores. Even more defective chips that can only work with four cores enabled are being stockpiled and will soon be sold as four-core Sandy Bridge-E and Xeon chips.

    So basically all these chips are manufactured with eight 'possible' cores. There is no wasted space on the die. However, due to imperfect manufacturing processes, some of these chips will have defects. In fact, the larger the die area, the more likely a defect occurs within each chip. With SB-E's very large die area, Intel is now experiencing a problem more often seen by GPU manufacturers such as NVIDIA. They are dealing with it in the same way: while NVIDIA sells a GTX 580 with die defects as a GTX 570, Intel sells a defective 8-core SB-E as a fully working 6-core Xeon or SB-E chip.

    As the manufacturing process improves (an ongoing effort), we will start to see lower-cost SB-E chips and possibly also fully enabled, defect-free SB-E parts for desktop/workstation users.
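
    As a rough illustration of why die area hurts yield so much, here is a sketch using the classic Poisson yield model (the defect density and die areas below are illustrative assumptions, not Intel figures):

        import math

        def poisson_yield(area_mm2, defects_per_mm2):
            # Fraction of dies with zero defects: Y = e^(-A * D)
            return math.exp(-area_mm2 * defects_per_mm2)

        D = 0.002  # assumed defects per mm^2
        for name, area in [("~216 mm^2 quad-core die", 216),
                           ("~435 mm^2 SB-E die", 435)]:
            print(f"{name}: {poisson_yield(area, D):.0%} defect-free")

    Under these assumptions, roughly doubling the area drops the share of perfect dies from about 65% to about 42%, which is exactly why harvesting 6-core parts from flawed 8-core dies makes economic sense.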
  • javalino - Monday, November 14, 2011 - link

    Agree!!! I will wait for a native 6-core; it will be much cooler, and maybe 1% faster.
  • karakarga - Monday, November 14, 2011 - link

    Hi,
    Going back to the i386 DX-40 days, AMD has built its worst CPU ever: 2 billion transistors instead of 0.9 billion, and the new Bulldozer architecture has not had much effect. So they doubled the transistor count for nothing! A very poor design. I think the AMD FX-8150 is not really an eight-core; I consider the new AMD Bulldozer 8C CPU a 4-core with 8 threads. Intel here reached 2.3 billion transistors, but the performance is about 1.5 times better than AMD's.

    The chipset details are known. Having only two SATA 6Gbps ports is a disadvantage, and the lack of native USB 3.0 support prevents mainboards from fully moving to the new speed standard.

    But AMD is not good at chipset design either. I am currently using a 990FX chipset with a 1090T CPU. Memory performance doesn't reach 10GB/s with four DDR3-2133 sticks running at the default 1600MHz. If I put this CPU on a 790FX mainboard with only two DDR2-1066 sticks, it passes 13GB/s. That means the old series up to the 1100T was designed for DDR2 and was never polished and optimized for DDR3 memory. The only advantage here is having 6 SATA 6Gbps ports, that's all!
  • Hauk - Monday, November 14, 2011 - link

    Was hoping to get 40 PCI-E lanes & 2600K performance for $300.. craptastic that they delayed the 3820 till next year. Can't wait any longer, 2600K it is..
  • medi01 - Monday, November 14, 2011 - link

    Hi,

    Why don't we see AMD CPU pricing alongside Intel CPU pricing?
  • g00ey - Monday, November 14, 2011 - link

    I think it is false advertising to call the Bulldozer 8C an eight-core CPU. It doesn't really have eight cores; it's actually only four cores where they have added an extra ALU inside each core. It's like doubling the core count of the i7s because of the hyperthreading (SMT) feature. The addition of ALUs is nothing but an enhanced version of hyperthreading, so a Bulldozer 8C is only 4 cores, a 6C is only 3 cores, and a 4C is only 2 cores.

    But AMD says: No, no, no, there are two computation CORES inside each MODULE.

    What a BIG WAD of *BULLSHIT*!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    They should be thrown into jail for such fraudulent statements!!!
  • raddude9 - Monday, November 14, 2011 - link

    Nope, Mr. Troll.

    Bulldozer 8C can run 8 threads simultaneously. Sandy Bridge E, with its 6 multi-threaded cores, can only run 6 threads at the same time; the other 6 threads have to wait.
  • BSMonitor - Monday, November 14, 2011 - link

    Actually, you are completely wrong.

    Hyperthreading allows 12 threads to fully utilize the resources of a 6-core processor.

    Bulldozer, by contrast, simply has double the integer hardware, allowing it to run 8 integer threads simultaneously, so long as there are that many consecutive integer computations in a row on each thread. Beware when floating-point threads start to appear; then it crawls back to 4 cores.
  • raddude9 - Monday, November 14, 2011 - link

    What did I say that is wrong?

    Hyperthreading means that each core holds the state of 2 threads. Only one thread can run at a time; usually when one thread stalls, the other thread can kick in. So at best it can run 6 threads at once; the 6 hyperthreaded threads are waiting in the background for their chance. But it still just runs 6 threads at once.

    You are trying to mislead people with your misinformation on the Bulldozer floating-point unit. Its FPU can run as either two independent 128-bit FPUs or a single 256-bit FPU, so it can run two independent floating-point instructions at once. So regardless of whether Bulldozer is running floating-point or integer instructions, it can still run 8 threads at once.
  • LittleMic - Tuesday, November 15, 2011 - link

    You are wrong because you are describing Sun's T1000 and T2000 CPUs, not Intel's HT. Sun's processors do indeed hide memory access latency this way.

    Intel processors actually schedule micro-instructions from both threads according to execution resource availability. It is quite old technology now, so the white papers have disappeared from Intel's web site, but if you have a look at
    http://en.wikipedia.org/wiki/Hyper-threading
    the picture on the right clearly shows that a pipeline stage can contain µ-instructions coming from 2 threads.
  • LittleMic - Tuesday, November 15, 2011 - link

    No edit...

    Finally found an "official" paper directly from Intel:
    http://download.intel.com/technology/itj/2002/volu...

    Have a look at page 10, which shows that the whole pipeline contains instructions from both threads simultaneously.
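
    For anyone following this sub-thread, the logical/physical distinction being argued about is easy to see on your own machine (a minimal sketch, assuming the third-party psutil package is installed):

        import psutil

        physical = psutil.cpu_count(logical=False)  # real cores
        logical = psutil.cpu_count(logical=True)    # hardware threads the OS schedules
        print(f"{physical} physical cores, {logical} logical CPUs")
        # A 3960X reports 6 physical / 12 logical; how an FX-8150 is counted
        # depends on whether the OS treats a Bulldozer module as one core or two.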
  • Lord 666 - Monday, November 14, 2011 - link

    Anand,

    I have read mixed information on the 26xx-series Xeons with respect to release date and architecture. I'm actually holding off on a much-needed server because I have read either December or January.

    With the socket being the same, is the reviewed SB-E the same design as the new Xeons? Will there be a 3D transistor design like Ivy Bridge?

    Thanks - Loyal reader for over 7 years
  • mwarner1 - Monday, November 14, 2011 - link

    I am impressed by how much memory you had in your 386SX! My first (IBM compatible) PC was a 486DX2-50 I bought for my Software Engineering degree and it only had 4MB. This was pretty much standard for the time.
  • mcturkey - Monday, November 14, 2011 - link

    Glad I'm not the only one who was thinking that! My 486 66 only had 4MB as well.
  • Anand Lal Shimpi - Monday, November 14, 2011 - link

    My 386 started with 4MB, but I kept it for a very long time as upgrading was fairly expensive. I eventually threw a ton of memory at it as my last upgrade to the platform :)

    Take care,
    Anand
  • BSMonitor - Monday, November 14, 2011 - link

    How big was your 386's hard drive?

    How many times over could you store its entire contents in 8 DIMMs of DDR3 memory now?? And probably for less cost!

    Thought I saw a 16GB kit on newegg for $75? Lol!
  • just4U - Monday, November 14, 2011 - link

    I had a roomy 81MB hard drive in my 386/16.
  • khanov - Monday, November 14, 2011 - link

    "With the socket the same, is the reviewed SB-E the same design as the new Xeons? Will there be 3D design like Ivy Bridge?"

    1. It is the exact same die as the new Xeons, although of course different parts are harvested for each market.

    2. Yes there will be a 3D transistor design (according to rumors) but this will be Ivy Bridge-E and will not launch until at least late 2012.
  • gamoniac - Monday, November 14, 2011 - link

    Anand,
    More and more power users are running VMs on their desktops or workstations. With most Intel and AMD CPUs now supporting Intel VT or AMD-V, I notice a lack of measurement in this department in pretty much all online reviews. When you update your test suite, could you possibly include some sort of VM test? Note: if so, could you possibly run the VM test on an SSD to eliminate the HDD as a limitation?

    Thanks for the great review and conclusion, as always.
  • Senti - Monday, November 14, 2011 - link

    I'm amazed how much fuss QuickSync is still generating in reviews. Let's face it - it's fairly useless in its current state. Buzzwords like "GPU video transcoding" can only impress casual users, not someone who cares about quality in the first place and speed only after that.

    With time it will become even more useless if, like GPU video decoding, it's unable to work with 10-bit and 4:2:2/4:4:4 content (very likely).
  • gunslinger5577 - Monday, November 14, 2011 - link

    This review indicates no significant improvement with 2x 16x PCI-E lanes in SLI. However, the ASUS X79 Pro motherboard review seems to indicate there is a measurable, and at times significant, advantage.
  • fishbits - Monday, November 14, 2011 - link

    Weird stuff. Why fret over on-die USB 3.0 when every mobo supports it? And why mourn Quick Sync for a CPU that flies at encoding without it, or when you'd already have an SNB with Quick Sync? Really unhappy with the new Porsche's glove compartment...

    Love the CPU/Platform, but too pricey for how much I'd use it over what's currently available. Hoping against hope that mainstream Ivy offers 8 RAM slots, but not holding my breath.
  • DanNeely - Monday, November 14, 2011 - link

    Each add-on chip the mobo makers include increases the cost of the board (not just the chip itself, but the engineering time needed to integrate it, and potentially, if enough chips are added, an extra layer on the PCB).

    You also take a hit in the number of PCIe lanes available for expansion slots. With legacy PCI gone from the southbridge, we're unlikely to see any 4x electrical slots coming off of it. Audio, ethernet, and FireWire will take 1 lane each; USB3 controllers will take 1 lane per 2 ports, probably 3 lanes total per board, leaving only 2 for expansion slots. The main impact here is just not being able to go all-USB3 for the legacy-free gloss without a major squeeze elsewhere. Scientific customers doing stuff that actually needs PCIe 3.0 bandwidth without needing 2x-width cards could end up being dinged, since it means several fewer total lanes for them to hook stuff up to.
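
    The lane budget described above works out like this (a back-of-envelope sketch, assuming the X79 PCH's 8 PCIe 2.0 lanes and the per-device costs listed):

        pch_lanes = 8  # assumed X79 southbridge PCIe 2.0 lanes
        consumers = {"audio": 1, "ethernet": 1, "firewire": 1,
                     "usb3 (3 controllers x 2 ports)": 3}
        used = sum(consumers.values())
        print(f"used {used} lanes, {pch_lanes - used} left for expansion slots")
        # -> used 6 lanes, 2 left for expansion slots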
  • Zak - Monday, November 14, 2011 - link

    I want native USB3 plus a significantly higher number of PCIe lanes so I can run two cards at full 16x and a decent RAID controller at 4x without having to pay over $300 for the mobo. Oh, and for god's sake say goodbye to the PCI slots, please, while improving the motherboard layout so dual-slot cards don't cover any available PCIe slots.

    Bullshit like "Three PCIe x16 slots!!!" (running at 8x, 8x, 2x) makes me sick. The latest Intel motherboards were rather underwhelming in terms of features.
  • chizow - Monday, November 14, 2011 - link

    It really seems as if Intel wants to kill off this high-end enthusiast desktop segment completely; what we have here is a by-product of their server market and perhaps the last of a dying breed. First sign was the change to multiple sockets and locking clock frequency on their non-enthusiast parts. Also, SB-E comes with a huge increase in platform cost compared to Nehalem that doesn't really justify the increase in performance over SB.

    $500 for the entry-level SB-E CPU and $300+ for the motherboard is going to be a bitter pill to swallow for those used to the $200-$300 entry-level Nehalem CPUs and $200 boards. I know there's going to be a 4-core part that may be closer to that price point sometime next year, but again, one has to ask if it will be worthwhile over a 2600K at that point, especially since the K is unlocked and the SB-E part isn't.

    Also factor in the reality that PCIe 3.0 is going to be a negligible benefit of the chipset, at least until ATI/NVIDIA's next-gen GPUs make use of the extra bandwidth. You also don't get any additional benefits in the way of SATA or USB support compared to last-gen SB products... it's really quite disappointing for a chipset that was held off this long.

    Overall the performance looks good, but at this price and size... is this the path CPUs are headed down? Huge and hot like GPUs? I mean, we thought Bulldozer was massive; SB-E is just as big, but at least it delivers when it comes to performance, I guess. I can see why Intel wanted to bifurcate their server/desktop business, but I think the unfortunate casualty will be the high-end enthusiasts who don't want to pay e"X"treme prices for the privilege.
  • redisnidma - Monday, November 14, 2011 - link

    Looking at these results, you have to wonder what in the world AMD was thinking when they designed Bulldozer (AKA Crapdozer).

    Feel sorry for them. :(
  • just4U - Monday, November 14, 2011 - link

    For the most part AMD's Bulldozer did give us 2500K speeds... and the multithreaded performance is there. This CPU is the fastest we've seen, but it certainly doesn't blow one away in comparison to the 2600K. The AMD CPU is really criticized for one thing: its single-threaded performance, which is no better than its cheapest processors'.
  • Guspaz - Monday, November 14, 2011 - link

    A minor performance boost in most real-world scenarios, and yet a massive increase in cost and power consumption...

    This whole chip is basically a big kludge. Take an 8-core Xeon and disable a quarter of the chip, slap a "consumer" label on it, and call it a day? That's not even trying, that's just lazy.

    This chip is 50% faster than SNB in heavily multithreaded applications because it has 50% more cores. A much more interesting approach would have been to take the existing Sandy Bridge design and increase the core count, rather than taking a Xeon and disabling parts of the die.
  • EJ257 - Monday, November 14, 2011 - link

    Actually, isn't that basically what they did with this? They took 8 SB cores and threw them on the die, took out the IGP, and dropped in a massive L3 cache. I mean, if you're going to build a gaming rig based on SB-E, would you actually care about the IGP at that point when you've got SLI or CrossFire GPUs?

    I understand how this move would make some people feel like they've been slapped in the face by Intel. Years as loyal customers, and this time around they get a "crippled" part to call a flagship. But look at the state of the high-end CPU market. At this point Intel is dominating, and there is really no incentive for them to do a completely different chip when a "crippled" Xeon can run circles around the best AMD has to offer. From their point of view this is the most economical way to do business. But yes, meh indeed when you already have an i7 2600K running smoothly.
  • adamantinepiggy - Monday, November 14, 2011 - link

    Or will there again be a desktop version without ECC support and a workstation Xeon version that has it (980X/990X vs. X5680 Xeon)? I'll take ECC support over faster RAM with eight populated slots, please. Larger and larger memory amounts mean a greater likelihood of bit errors, but for two generations of CPUs from Intel there has been no ECC RAM support on the CPU memory controller.
  • hechacker1 - Monday, November 14, 2011 - link

    I agree. With the massive amounts of memory you could potentially put on this platform, I'd really like to have a version with ECC for the workstation.
  • BSMonitor - Monday, November 14, 2011 - link

    I understand the need to keep news positive for AMD. Competition and all. However, repeatedly stating over time that they are competitive on price is kind of misleading in the grand scheme. Each new CPU architecture from Intel yields double-digit performance gains (lately). AMD's are often delayed and, in BD's case, yield backwards results in many benchmarks.

    The truth is, clock for clock, given as many transistors, given as much power and heat, AMD is grossly uncompetitive.

    The ONLY reason one can say their chips are competitive on price is that they have NO other choice but to sell them at that price. AMD looks at where its new CPUs sit in performance relative to Intel's lineup and prices accordingly. With as many R&D dollars, transistors, etc. as go into each FX-8150, the flagship CPU should at least be competing with the 2600Ks and 990Xs of the world, forcing Intel either to lower the $1000 tag on SNB-E or allowing AMD a $1000 alternative.

    However, all we get from AMD are mediocre, late-to-market attempts to "catch up". My point: AMD needs a new infusion of engineers and/or a new approach. A completely new idea/redesign/etc.

    Let's face it, the x86 market is now Intel x86. Perhaps AMD should take what it knows in processor design and embrace a new idea.. maybe a mixed ARM/x86 or an enhanced 64-bit ARM for desktop PCs. Something to stand out and deliver on. In pure x86, AMD is falling further behind. BD did not even catch up to 4-core SNB. And Ivy Bridge is being held back, as there is no real competition for it. It looks like AMD will be out of the desktop CPU space within a year or two, or at least relegated to the Cyrix status of the 2000s.
  • bji - Monday, November 14, 2011 - link

    Your point about AMD's prices doesn't make any sense. You're saying that AMD is not a good value because it is selling its chips at a price that makes them a good value, rather than making faster chips and selling them for more money like Intel does?!?

    Since when does "a good price:performance ratio" not equal "a good value" just because the CPU vendor doesn't have high (or any!) profit margins?
  • actionjksn - Monday, November 14, 2011 - link

    I'm pretty sure the motherboard makers will add the extra ports, even though the controllers aren't built into the processor or chipset.
  • just4U - Monday, November 14, 2011 - link

    mmm double standards..
  • hechacker1 - Monday, November 14, 2011 - link

    No, not double standards.

    This chip does outclass its competition (by 50-plus percent) in some cases that are highly threaded.

    It actually uses all of those transistors to be a speed demon. Bulldozer just doesn't, even with its 2 billion transistors.
  • Phylyp - Monday, November 14, 2011 - link

    Does the 2+ billion transistor count include the 2 fused-off cores as well, or only the active transistors?
  • iceman-sven - Monday, November 14, 2011 - link

    I was interested in SB-E and the X79 platform, but I will skip it and continue to use my i7 965X. Maybe I'll go for IB-E, but that's doubtful once the NVIDIA Kepler GPU is released. What I really want is Haswell-E on something like an EVGA Classified Super Record 2 (SR-2) class motherboard.
  • cearny - Monday, November 14, 2011 - link

    Thanks for including the Chromium build time test :)

    For the GCC people out there, why not a kernel build time test in the future also?
  • DanNeely - Monday, November 14, 2011 - link

    Actually, why not do a Chromium build with GCC to make the two numbers more directly comparable? Doing it this way would give a 'free' article on which compiler is better.
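
    If anyone wants to run that comparison at home, a minimal timing harness might look like this (a sketch; the source-tree path is a placeholder and a make-based build is assumed):

        import os
        import subprocess
        import time

        def time_build(tree):
            subprocess.run(["make", "clean"], cwd=tree, check=True)  # start cold
            t0 = time.perf_counter()
            subprocess.run(["make", f"-j{os.cpu_count()}"], cwd=tree, check=True)
            return time.perf_counter() - t0

        secs = time_build("/path/to/source-tree")  # placeholder path
        print(f"build finished in {secs:.1f}s")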
  • ckryan - Monday, November 14, 2011 - link

    What about the corresponding release of Intel's next SSD?

    We had speculated that since it missed its initial window, it would be released on the 14th with SB-E. I guess we were wrong again.

    Anyone want to field this?
  • xpclient - Monday, November 14, 2011 - link

    Why? Have you got newer multicore-specific benchmarks that prove otherwise, wise guy? Then share them.
  • xpclient - Monday, November 14, 2011 - link

    Here's a Jan 2010 benchmark: http://www.infoworld.com/d/windows/windows-7s-kill... Fact: you would need 8-core machines before Windows 7 can outperform XP.
  • yankeeDDL - Monday, November 14, 2011 - link

    I'm with you, xpclient.
    I will never understand Microsoft fanboys. Why do they expect the OS to make such a huge impact on benchmarks?
    It's like with motherboards: you can differentiate on ease of use, stability, features, supported hardware... but the benchmarks will be substantially the same.
    XP trails Windows 7 on multi-core because it was never designed to support so many cores, and MS has no interest in updating it.
  • Per Hansson - Monday, November 14, 2011 - link

    Hi, what resolution and in-game graphical settings were used for the World of Warcraft gaming test?
    It always amazes me how well that game scales with super high-end CPUs, but if it's run at just a silly low resolution it does not really matter. So, which is it? (This isn't mentioned in Bench either....)
  • Per Hansson - Monday, November 21, 2011 - link

    Here is Anand's reply to this question in case anyone cares ;)
    ---
    1680 x 1050, 4X AA, all detail settings maxed (except for weather) :)

    Take care,
    Anand
  • Mightytonka - Monday, November 14, 2011 - link

    Typo on the second page. Should read RST (Rapid Storage Technology), right?
  • Ocire - Monday, November 14, 2011 - link

    Nice review! :-)

    If you want to test PCIe bandwidth, you could use the bandwidthTest utility that comes with the CUDA SDK. It's easy to set up, and you can also test configurations with multiple GPUs. You should get quite reliable results for pure PCIe Gen 2 performance with that.
    It would be really interesting to get some performance numbers for PCIe, as that is the bottleneck in quite a few GPU-computing scenarios.
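
    For anyone without the SDK samples handy, here is a rough host-to-device bandwidth sketch with PyCUDA (an assumption on my part that PyCUDA and an NVIDIA GPU are available; pinned memory matters, as pageable transfers are much slower):

        import numpy as np
        import pycuda.autoinit          # creates a CUDA context
        import pycuda.driver as drv

        N = 256 * 1024 * 1024                           # 256 MB buffer
        host = drv.pagelocked_empty(N, dtype=np.uint8)  # pinned host memory
        dev = drv.mem_alloc(N)

        start, end, reps = drv.Event(), drv.Event(), 20
        start.record()
        for _ in range(reps):
            drv.memcpy_htod(dev, host)                  # host -> device copy
        end.record()
        end.synchronize()
        seconds = start.time_till(end) / 1e3            # time_till returns ms
        print(f"H2D bandwidth: {reps * N / seconds / 1e9:.2f} GB/s")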

    Cheers!
  • LancerVI - Monday, November 14, 2011 - link

    Sounds like a great enthusiast proc married to a mainstream chipset at enthusiast prices.

    That means 'no joy' for me.

    I guess I'll be hanging on to my little i7 920 that could for a bit longer. Going on 4 years now. That's unheard of for me!!!
  • hechacker1 - Monday, November 14, 2011 - link

    I'm also going to hang onto the i7 920. Overclocked, the X58 platform can still compete with the best of Sandy Bridge for almost any workload. Sure, we're missing some IPC and power enhancements, but nothing worth spending serious cash on.

    What I'm looking at now is the Gulftown prices. I'm hoping they come down from the $1000 Extreme part, and perhaps we get an affordable 6-core chip for the X58 platform.

    I'd be happy to stick with Gulftown until we see affordable 8 core parts, or major IPC improvements.
  • Makaveli - Monday, November 14, 2011 - link

    The 980 non-X version is already going for $550.

    This is your next upgrade, as it is mine; I'm also on a 920.

    I doubt that price will drop any lower.
  • hechacker1 - Monday, November 14, 2011 - link

    Yeah I'm going to be watching the prices closely for a while.

    At $550 for the lowest-end Gulftown on Newegg, it's still not affordable for me.

    It's still cheaper than the intro prices of Sandy Bridge-E including a new motherboard though.

    I'll have to watch the forums closely for people wanting to sell their chips; I imagine you can snag one used for a good deal.
  • davideden - Monday, November 14, 2011 - link

    I currently have a Core i7 2600K LGA 1155 processor. I am assuming that I won't be able to use this with the new LGA 2011 socket on the new Sandy Bridge E motherboards. Will there be any cheaper compatible processors in the near future at the price point of the Core i7 2600K? I was disappointed with not being able to utilize triple-channel memory, or to use all my sticks of RAM, with the current LGA 1155 motherboards. The quad-channel RAM along with the 8 slots has me most excited for the new platform, as I do video editing/motion graphics/3D work. Thanks!
  • ggathagan - Monday, November 14, 2011 - link

    You assume correctly.
    As stated in the article, the 3820 is due out early next year and expected to run at about $300.
    If you look at the 1st page of the article, you'll note that the 3820 is a little faster than the 2700K, with the same max in turbo mode and a larger L3 cache.
  • DanNeely - Monday, November 14, 2011 - link

    Except that the 2700K is unlocked and the 3820 has a severe overclocking limit.
  • theangryintern - Monday, November 14, 2011 - link

    Thanks for the great review, Anand. I had been waiting for SNB-E to do an upgrade from my X58 Core i7, but now I'm thinking of saving some money and going with a regular Sandy Bridge; the gaming gains just aren't enough to justify the added expense.
  • Makaveli - Monday, November 14, 2011 - link

    That makes no sense; there are no gaming gains going from X58 to SB!

    You will still be GPU-bottlenecked in most games!

    And a whole new build will be an added expense for no gain in games!
  • Beenthere - Monday, November 14, 2011 - link

    Not much to see here except over-priced CPUs and mobos. Nice to see Intel fans smartening up and passing on these cash cows.
  • Lazlo Panaflex - Monday, November 14, 2011 - link

    Hi Anand, thanks for the review :)

    I apologize in advance if this was asked earlier, but what specifically are your criteria for determining a stable overclock? For example, do you run Prime95 large FFTs for a predetermined period of time, or perhaps IBT (Intel Burn Test) for a certain number of runs? Or do you utilize some other tool? Just curious, since this question often pops up in the CPU forums, and everyone has their own opinion of what constitutes a stable overclock.

    Regards,
    LP
  • yankeeDDL - Monday, November 14, 2011 - link

    LOL. Supply and demand is the cause; jacking up the price is the effect. You are a bit mixed up, it seems.
    Also, in the consumer market, having the fastest CPU (or GPU) that you sell for a ridiculous price doesn't mean that someone buys it.
    How many people buy an HD 6990? Few, and that's why supplies are so scarce.
    The 3960X is a "show off" chip that claims the performance crown. No one in the right state of mind will buy it: it is just not worth the money it costs, unless you absolutely need those few extra % of performance.
  • JlHADJOE - Tuesday, November 15, 2011 - link

    We have chips priced at $1000 because the market has shown that it is willing to pay that amount to get the top-performing chip. It doesn't matter that AMD doesn't have an entry in that segment, because if it did, then we'd probably have AMD's FX-9300 or something priced at $900, while Intel sells their 3960X at $1100.

    This was the exact case when AMD was competitive, and their FX-57 was sold at $1100, vs Intel's Pentium EE which was going for $999. Was there a competitive AMD at the time? Yes. They were even in the lead. Were prices still jacked up? Yes.

    The $1000 CPU will only go away if we, as consumers wise up and say we are not willing to pay that much money for a chip.
  • karkas - Monday, November 14, 2011 - link

    Rapid STORY Technology?
  • GTVic - Monday, November 14, 2011 - link

    Lack of Quick Sync is not nearly the negative the reviewer seems to think it is. It is not a well-supported technology, and not many people would use it on a day-to-day basis. This shouldn't even be mentioned in the article unless you also want to bring up support for Intel Viiv: http://en.wikipedia.org/wiki/Intel_Viiv.
  • DanNeely - Monday, November 14, 2011 - link

    AMD's been selling 6-core Phenom CPUs since April 2010 (6-core Opterons launched in June '09). Prior to SB's launch they were very competitive with Intel systems at the same mobo+CPU price points, and while they have fallen behind since then, they are still decent buys for more threaded apps because AMD has slashed prices to compete.

    On the Intel side, while hyperthreading isn't 8 real cores, for most workloads 8 threads will run significantly faster than 4.
  • ClagMaster - Monday, November 14, 2011 - link

    This Sandy Bridge-E is really a desktop supercomputer well-suited for engineering workstations that run Abaqus or Monte Carlo programs. With that intent, the Xeon brand of this processor, with eight cores and ECC memory support, is the processor to buy.

    The Xeon will very likely get the SAS support whose absence Anand so laments, on a specialty chipset based on the X79. And engineering workstations are not made or broken by the lack of native USB 3 controllers.

    DDR3-1333 is no slouch. With four channels of that memory there will be much faster memory IO than on a two-channel i7-2700K system with the same memory.

    This Sandy Bridge-E consumer chip is for those true, frothing, narcissistic enthusiasts who have thousands of USD to burn and want the bragging rights.

    I suppose it's their money to waste and their chests to thump.

    As for myself, I would have purchased an ASUS C206 workstation and an E3-1240 Xeon processor.
  • sylar365 - Monday, November 14, 2011 - link

    Everybody is seeing the benchmarks and claiming that this processor is overkill for gaming, but aren't all of these "real world" gaming benchmarks run with the game being the ONLY application open at the time of testing? I understand that you need to reduce the number of variables in order to produce accurate numbers across multiple platforms, but what I really want to know, more than "can it run (insert game) at 60fps", is this:

    Can it run (for instance) Battlefield 3 multiplayer on "High" ALONGSIDE Origin, Chrome, Skype, Pandora One and streaming software while giving a decent stream quality?

    Streaming gameplay has become popular. Justin.tv has spun off Twitch.tv as a separate site just to handle all of the gamers streaming themselves playing. Streaming software such as XSplit Broadcaster does REAL-TIME video encoding of screen captures or a game source and then bundles it for streaming in one swoop, ALL WHILE PLAYING THE GAME AT THE SAME TIME. For streamers who count on ad revenue as a source of income it becomes less about Time = Money and more about Quality = Money, since everything is required to happen in real time. I happen to know for a fact that a 2500K @ 4.0GHz chokes on these tasks, and it directly impacts the quality of the streaming experience. Don't even get me started on trying to stream Skyrim at 720p, a game that actually taxes the processor. What is the point of running a game at its highest possible settings at 60fps if the people watching only see something like a watercolor re-imagining at the other end? Once you hurdle bandwidth constraints and networking issues, the stream quality is nearly 100% dependent on the processor and its immediate subsystem. Video cards need not apply here.

    There has got to be a way to determine if multiple programs can be run in different threads efficiently on these modern processors. Or at least a way to see whether or not there would be an advantage to having a 3960x over a 2500k in a situation like I am describing. And I know I can't be the only person who is running more than one program at a time. (Am I?) I mean, I understand that some applications are not coded to benefit from more than one core, but can multi-core or multi-threaded processors even help in situations where you are actually using more than one single threaded (or multi-threaded) application at a time? What would the impact of quad-channel memory be when, heaven forbid, TWO taxing applications are being run at the SAME TIME!? GASP!
  • N4g4rok - Monday, November 14, 2011 - link

    That's a good point, but don't forget that a lot of games are so CPU intensive that it would take more than just background applications to cause the CPU to lose its performance during gameplay. I can't agree with the statement that streaming video will be completely dependent on the processor. The right software will support hardware acceleration, and would likely tax the GPU just as much as the CPU.

    However, with this processor, and a lot of Intel processors with Hyper-Threading, you would be sacrificing just a little bit of its turbo frequency to deal with those background applications. That should not be a problem for this system.

    Also, keep in mind that benchmarks are just trying to give a general case. If you know how well one application runs, and you know how well another runs, you should be able to come up with a rough idea of how the system will handle both of those tasks at the same time. And it's likely that the system running these games is also running the necessary background software; you can assume things like Intel's Turbo Boost controller, the GPU driver software, etc. are present.
  • N4g4rok - Monday, November 14, 2011 - link

    "but don't forget that a lot of games are so CPU intensive that it would take more than...."

    My mistake, I meant 'GPU' here.
  • sylar365 - Monday, November 14, 2011 - link

    "The right software will support hardware acceleration, and would likely tax the GPU just as much as the CPU"

    In almost every modern game I wouldn't want my streaming software to utilize the GPU(s) since it is already being fully utilized to make the game run smoothly. Besides, most streaming software I know of doesn't even have the option to use that hardware yet. If it did I suppose you could start looking at Tesla cards just to help process the conversion and encoding of stream video, but then you are talking about multiple thousands of dollars just for the Tesla hardware. You should check out Tom's own BF3 performance review and see how much GPU compute power would be left after getting a smooth experience at 1080p for the local machine. It seems like the 3960x could help. But I will evidently need to take the gamble of spending $xxxx myself since I don't get hardware sent to me for review and no review sites are posting any type of results for using two power hungry applications at the same time.
  • N4g4rok - Tuesday, November 15, 2011 - link

    No kidding.

    Even with its performance, it's difficult to justify that price.
  • shady28 - Monday, November 14, 2011 - link


    Could rename this article 'Core i7 3960X - Diminishing Returns'

    Not impressed at all with this new chip. Maybe if you're doing a ton of multitasking all the time (like constantly doing background decoding) it would be worth it, but even in the multitasking benchmarks it isn't exactly revolutionary.

    If multitasking is that big of a deal, you're better off getting a G34 board and popping in a pair of 8- or 12-core Magny-Cours AMDs. Or maybe one of the new 16-core Interlagos G34 chips. Heck, the 16-core is selling for $650 at Newegg already.

    For anything else, it's really only marginally faster while probably being considerably more expensive.
  • Bochi - Monday, November 14, 2011 - link

    Can we get benchmarks that show the potential impact of the greater CPU power & memory bandwidth? This may be overkill for gaming at 1920 x 1080. However, I would like to know what type of performance changes are possible when it's used on a top end Crossfire or SLI system.
  • rs2 - Monday, November 14, 2011 - link

    "I had to increase core voltage from 1.104V to 1.44V, but the system was stable."

    Surely that is a typo?
  • mino - Monday, November 14, 2011 - link

    "Quick Sync leverages the GPU's shader array"

    This is simply not true. And you know it. Shame.
  • Steelski - Tuesday, November 15, 2011 - link

    The CS4 test is irrelevant, because someone buying this kind of hardware would appreciate the CS5 advantage other websites show.
  • jewie27 - Tuesday, November 15, 2011 - link

    I was waiting for X79 but after I read the initial reviews I bought a Z68 motherboard and 2500K cpu for gaming.
  • C300fans - Tuesday, November 15, 2011 - link

    Me too. $999 + X79 for a 0% improvement in gaming. What crap! Bulldozer doesn't seem that bad compared to the 3960X overall.
  • yankeeDDL - Tuesday, November 15, 2011 - link

    Making unsubstantiated claims about something that is non-intuitive falls, in my dictionary, under fanboyism (if that's a word).
    The fact that Win7 "runs better" on a certain, relatively old, PC is one thing. Stating that Windows 7 is faster than XP (in spite of a documented benchmark proving otherwise) is another.
    Like I said, you can compare OSes in terms of HW support, ease of use, even responsiveness; however, none of those translates into one OS being "faster".
    Faster means that when you run a benchmark (pick any of the ones that Anand ran in this article), you get a noticeable increase in speed.
    The OSes provide the infrastructure to run applications; they cannot provide any fundamental speed difference, unless, of course, you have a PC without enough RAM, for example, in which case the OS that uses less RAM will have an obvious advantage (because it offers more "free" RAM for apps to run). But that, again, has nothing to do with one OS being faster: if anything, it is more efficient.
    I have 4GB on both my laptop (Win7) and my desktop (WinXP) and the difference is negligible: I nearly always have more than 2GB of RAM committed, so it is no surprise that on your PC Win7 with ReadyBoost is faster: just spend ~$15 on 2GB of RAM and you'll see a huge performance improvement on both XP and 7.
  • jmelgaard - Tuesday, November 15, 2011 - link

    So "Faster" must not apply to the OS's capability to respond to the user, it must only apply to the OS's capability to server application requests?...

    Wait what?...
  • Kob - Tuesday, November 15, 2011 - link

    You guys need to look at the engineering of your requests: 6 SATA 6Gb/s ports would require feeding 6 × 6 Gb/s = 36 Gb/s of data, while the total max theoretical memory bandwidth of the chipset is 37 Gb/s. It can't do that while also taking care of OS, application, and video memory requirements.
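    A quick sketch of that arithmetic (the per-port rate and the 37 Gb/s ceiling are the figures claimed above, not datasheet values):

        // Back-of-the-envelope check of the numbers above; figures are assumptions.
        #include <iostream>

        int main() {
            const double per_port_gbps = 6.0;        // SATA 6Gb/s link rate per port
            const int ports = 6;
            const double chipset_budget_gbps = 37.0; // claimed chipset ceiling

            std::cout << per_port_gbps * ports << " Gb/s of worst-case SATA demand vs "
                      << chipset_budget_gbps << " Gb/s available\n";
        }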
  • cbutters - Monday, December 12, 2011 - link

    6 × 6 Gb/s isn't going to be happening constantly. You build one bridge that has a certain amount of bandwidth (12 Gb/s perhaps, I don't know) and let the ports use the available shared bandwidth; that doesn't mean you can't add additional ports. This is one of the benefits of serial interfaces.
  • C300fans - Tuesday, November 15, 2011 - link

    Intel Gulftown (6C): 32nm, 1.17B transistors, 240mm²
    Intel Sandy Bridge-E (6C): 32nm, 2.27B transistors, 435mm²

    SB-E, what crap! Double the transistors, double the size, merely a 20% gain over the SB 2600K. $999 for this? I would rather get two Interlagos 6200s instead.
  • sna1970 - Tuesday, November 15, 2011 - link

    Using 5870 CF to show us that dual x8 PCIe is the same as dual x16 is a mistake I'm shocked someone like you fell into...

    You should have tested the 6990 in CF, or the 590, and seen the difference between x16 SLI/CF and x8 SLI/CF.

    And how do you consider a 5870 a MODERN GPU?

    Quote : "Modern GPUs don't lose much performance in games, even at high quality settings, when going from a x16 to a x8 slot."

    Answer: WRONG. Try high-end dual-GPU cards in SLI/CF!
  • JlHADJOE - Tuesday, November 15, 2011 - link

    On Page 2, 'The Pros and Cons':
    > Intel's current RST (Rapid Story Technology) drivers don't support X79,

    Rapid Storage, perhaps?
  • jmelgaard - Tuesday, November 15, 2011 - link

    Computers are only getting faster in one way today, and that is more cores; designing for a strict number of cores is simply stupidity in today's world.

    That said, developing games that support multiple cores might be somewhat more difficult than designing highly concurrent applications that process data or requests for data. (I can't say for sure, as I have only briefly touched the game-development part of the industry, but I work with the other part on a daily basis.)

    But while you might save development cost right now by going down that road, you will spend the savings once you suddenly have to design for 8 cores.

    Carrying technical debt is never a good thing (and designing with a set number of cores in mind can, in my programming experience, only add to it); it only gets more expensive to remove down the road, as has been proven true again and again.

    And even though Frostbite 3 might be developed from the ground up, they still have to think the concept up again, whereas had they gone for high concurrency, that concept would already be in place for the next version.
  • TC2 - Tuesday, November 15, 2011 - link

    Note:
    BD (4 modules × 2 cores): ~2B transistors, 315mm²
    SB-E (6 cores × 2 threads): ~2.27B transistors (+14%), 435mm² (+38%, including unused space for 2 more cores), up to 15MB cache, ...

    Impressive, all in all!
  • C300fans - Tuesday, November 15, 2011 - link

    Intel Gulftown (6C): 32nm, 1.17B transistors, 240mm²
    Intel Sandy Bridge-E (6C): 32nm, 2.27B transistors, 435mm²

    I don't see anything impressive there. Any performance improvements?
  • Blaster1618 - Tuesday, November 15, 2011 - link

    Given that QPI @ 3.2GHz, 205 Gb/s (25.6 GB/s), also handled the PCIe load, can't we have something in the middle? I'm still a little confused: is DMI 2.0 still just a simple parallel-style interface, while QPI is a high-speed serial interface?
  • C300fans - Tuesday, November 15, 2011 - link

    Just think of DMI 1.0 as four PCIe 1.0 x1 lanes, and DMI 2.0 as four PCIe 2.0 x1 lanes; both are serial links, good for roughly 1 GB/s and 2 GB/s in each direction, respectively.
  • jmelgaard - Tuesday, November 15, 2011 - link

    Clearly you didn't read a single one of my points, or you simply lack the understanding.

    Applications are not developed to target specific cores; your OS handles all that. It is a simple matter of pushing out jobs in threads or processes.

    Processing in 10, 100 or 1000 threads/processes is no more difficult than doing it in 4... it just requires that you have enough "jobs" to process (and that term was deliberately chosen)...

    This requires a different mindset, though, and it might be harder to think of games that way right now, mostly because they have been used to running everything in that single game loop, but doing it now could be a rather good ROI down the road.
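    To make the "jobs, not cores" model concrete, here is a minimal C++ sketch (a toy with invented names, assuming nothing about any particular engine): the code only submits jobs; how many workers drain the queue is a runtime detail.

        #include <atomic>
        #include <condition_variable>
        #include <functional>
        #include <iostream>
        #include <mutex>
        #include <queue>
        #include <thread>
        #include <vector>

        // Toy job system: game code submits jobs; worker count is a runtime detail.
        class JobSystem {
        public:
            explicit JobSystem(unsigned workers = std::thread::hardware_concurrency()) {
                if (workers == 0) workers = 2;            // hardware_concurrency may report 0
                for (unsigned i = 0; i < workers; ++i)
                    pool.emplace_back([this] { run(); });
            }
            ~JobSystem() {                                // drain remaining jobs, then join
                { std::lock_guard<std::mutex> lk(m); done = true; }
                cv.notify_all();
                for (auto& t : pool) t.join();
            }
            void submit(std::function<void()> job) {
                { std::lock_guard<std::mutex> lk(m); jobs.push(std::move(job)); }
                cv.notify_one();
            }
        private:
            void run() {
                for (;;) {
                    std::function<void()> job;
                    {
                        std::unique_lock<std::mutex> lk(m);
                        cv.wait(lk, [this] { return done || !jobs.empty(); });
                        if (jobs.empty()) return;         // only reachable once done is set
                        job = std::move(jobs.front());
                        jobs.pop();
                    }
                    job();                                // no job ever asks "which core am I on?"
                }
            }
            std::vector<std::thread> pool;
            std::queue<std::function<void()>> jobs;
            std::mutex m;
            std::condition_variable cv;
            bool done = false;
        };

        int main() {
            std::atomic<int> processed{0};
            {
                JobSystem js;                             // 2, 4 or 16 cores: same code
                for (int i = 0; i < 1000; ++i)            // 1000 jobs, not 1000 threads
                    js.submit([&processed] { ++processed; });
            }                                             // destructor waits for the queue to empty
            std::cout << processed << " jobs processed\n";
        }

    A real engine would add priorities and dependencies on top, but the core point stands: the submitting code never mentions a core count.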
  • DarkUltra - Tuesday, November 15, 2011 - link

    How about overclocking with Turbo Boost enabled? I mean, if the 3960X is stable at 4.4GHz, can it be stable at 4.8GHz when games or applications only use four cores? Then it would overclock and perform as well as a 2600K with four heavy threads.
  • yankeeDDL - Tuesday, November 15, 2011 - link

    Guys, there are always people with more money than brains who will purchase just about anything.
    That's not the point. Having the fastest CPU makes it a status symbol, and whoever makes it has the luxury of pricing it in the $1000 range, for fools to buy.
    I don't know about CPUs, but I do know that the top-performing GPUs (HD6990 and GTX590) are sold in extremely low volumes, partly because of the relatively low ROI and partly because the market is so small that inventories are scarce to begin with.
    So, you may be right on the CPU side, but in general, you're both wrong.

    That said, my point was that if AMD had performed and delivered a good CPU instead of the FX8150, OR the FX8150 at a good price point ($170, not $279), then Intel would have had a tougher time pushing out the 3960X at this price, AND it would have had to work harder on the chipset. However, because of the huge lead it has over AMD, Intel can now comfortably rebrand a "mid-range" chipset and shove it at customers, who have no choice but to take it if they want the best CPU.
  • retoureddy - Wednesday, November 16, 2011 - link

    I agree that only two 6Gb/s SATA ports is a disappointment. Interesting, though, is running two SSDs in RAID 0 on the Intel controller. With two Kingston SSDs I manage really good figures (CrystalDiskMark, 4000MB test): 1040MB/s read and 621MB/s write (SEQ) / 675 and 481 (512K) / 28 and 253 (4K) / 279 and 405 (4K QD32). I never managed this kind of throughput on the Z68 or P67 on-board controllers. These numbers are getting close to hardware RAID controllers like Areca and LSI. I would have been interested to see where the bottleneck lies if X79 had had more ports. Even though X58 is 3Gb/s SATA, you had no problem bottlenecking its Intel RAID controller at around 800MB/s.
  • Valitri - Wednesday, November 16, 2011 - link

    Good review as always.

    Turns out to be slightly less than I was expecting. The performance "jump" from an 1155 SB just isn't there for generic enthusiasts and gamers. Perhaps encoders, renderers, and mathematicians will enjoy the performance, but it doesn't do much for me. Makes me very happy I stepped up to a 2500K, and I look forward to Ivy Bridge early next year.
  • Gonemad - Wednesday, November 16, 2011 - link

    If there are 16GB DIMMs, and this sucker has 8 DIMM sockets... 128GB in a home system... hmmm. It makes SSDs all the less appealing. (Especially since you just blew lots of money on DIMM memory, but still...) Pop in a RAM drive, wait 5 minutes to boot... then don't wait anymore. I can see some specific usage that could benefit from this kind of storage-subsystem speed, even if it is a 'tiny' 64GB RAM drive.
    It may not entirely replace a small SSD, but you can do some neat tricks with that kind of RAM at home. I know a single module costs many times more than an SSD, but just the fact that you can do it is remarkable.
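    (A minimal example of the idea, on Linux at least, where the size and mount point below are just illustrations: mount -t tmpfs -o size=64g tmpfs /mnt/ramdisk gives you a RAM-backed drive of exactly that kind.)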

    Too bad this chip costs a lot, and IT. IS. HUGE. The thing has the size of a cup-holder, or at least the socket does. With that amount of die you could build two i7-2600Ks, and with the amount of money you blow on one, you could still pay for three i7s.

    Oh yes, check for yourselves. That's your premium profit margin right there.

    This sucker has 435mm² while the quad-core Sandy Bridge has 216mm². Twice as much!
    This behemoth will nick your pockets for $999, when an i7-2600K cuts you $317.

    Nearly 3 times more. More than 3 times, in fact; $999 / $317 is about 3.15, so it is almost exactly pi times more expensive. I bet you are paying for the lost wafer area too. Or it is just a wild coincidence. It doesn't perform twice as well, only 50% better, in some benchies. And it is so big that you can almost call it a TILE, not a CHIP. I am betting that on the same wafer area where you could build three 2600Ks, you can build only two of these and you lose the difference. It should squash the competition. It is a bomb.

    Some chip.
    Diminishing returns indeed.
  • Wolfpup - Wednesday, November 16, 2011 - link

    "All of this growth in die area comes at the expense of one of Sandy Bridge's greatest assets: its integrated graphics core"

    Whaaaaat? Greatest assets? It's a waste of space. It should be used for more cache or another core or whatever on the quad version. I can't believe this site... AnandTech of all places... has ANYTHING positive to say about integrated graphics!
  • noeldillabough - Wednesday, November 16, 2011 - link

    For laptops the integrated graphics is AWESOME; however, on my gaming machine with top-end graphics cards, spending die space on integrated graphics seems silly.
  • jmelgaard - Thursday, November 17, 2011 - link

    The fact that you still talk about X number of cores shows you haven't understood my posts.

    Your thinking: "How many cores can I make my game utilize?"

    My model: "How many small enough jobs of processing can I split my game up into?"

    The number of cores has no relevance in modern architectures, though in a game engine you probably want to take control over the execution of those jobs, prioritize jobs, etc.

    The funny thing is, your BF3 already runs on 500+ cores when it comes to the rendering, lighting, polygon transformations and so on... all by chopping the big job of rendering a screen into little bits of work... just like I suggest you can do with the rest of a game, and just like we do with so many other applications today.
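    To make the chopping concrete, a toy C++ sketch (the entity type and chunk size are invented for illustration; in a real engine the chunks would be handed to a job pool like the one sketched earlier, rather than raw threads):

        #include <algorithm>
        #include <cstddef>
        #include <functional>
        #include <thread>
        #include <vector>

        struct Entity { float x = 0.0f, vx = 1.0f; };

        // One chunk of the update is one independent "little bit of work".
        void update_chunk(std::vector<Entity>& all, std::size_t begin, std::size_t end, float dt) {
            for (std::size_t i = begin; i < end; ++i)
                all[i].x += all[i].vx * dt;
        }

        void update_all(std::vector<Entity>& all, float dt) {
            const std::size_t chunk = 1024;          // job granularity, not a core count
            std::vector<std::thread> workers;
            for (std::size_t b = 0; b < all.size(); b += chunk)
                workers.emplace_back(update_chunk, std::ref(all), b,
                                     std::min(b + chunk, all.size()), dt);
            for (auto& t : workers) t.join();        // raw threads only to keep the sketch short
        }

    Nothing in update_all knows or cares whether the machine has 4 cores or 40.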

    "I doubt it. There's a reason why game engines are modified as they get older."

    Almost every single corp only sees ahead to the next budget year...
  • seapeople - Saturday, November 19, 2011 - link

    Of course it's inevitable you would resort to personal attacks and profanity in an argument you are losing.

    It's a different mindset... do you think graphics work is programmed by thinking "OK, today's GPUs have 500 cores, so let's optimize our game to use exactly 500 threads..."
  • abhicherath - Sunday, November 20, 2011 - link

    Why? Why are those 2 cores fused off? Seriously, for a 1000-buck CPU you don't expect Intel to hold stuff back... gosh, this is lack-of-competition crap. If AMD's Bulldozers were powerful as hell and outperformed the i7s, I sure as hell expect those 2 cores would be active...
    What's your opinion?
  • jmelgaard - Sunday, November 20, 2011 - link

    "This doesn't involve a diatribe about number of cores in modern architectures."

    What what?... Do you even know what you are writing anymore?

    What I am talking about is software architecture, which is highly relevant to the discussion.
  • Flerp - Sunday, November 20, 2011 - link

    Even though there are very healthy gains in specific areas, I find the Sandy E to be a bit underwhelming, especially compared to how badly the X58 slaughtered the 775 platforms when it made its debut. I guess I'll be holding on to my X58 platform for another year or so and see what kind of improvements Ivy will bring.
  • jmelgaard - Monday, November 21, 2011 - link

    @rarson

    The whole reason I began to talk about software architectures is because you are so hell-bent on sticking to your idea of "optimizing for a number of cores"; I had to make you realize that you need to let go of that idea, and your refusal to do so only makes me hope that you don't actually work in software development. No offence intended, because I would never be fit to build a house either; it's not my field.

    If you had ever gotten to understand that, the next discussion would be whether it was beneficial to adopt this strategy within games, whether it was viable, and whether it had an ROI worth pursuing if you could choose outside the bounds of this year's budget: would it be an architecture that might cost us 2 to 3 times as much to pursue now, but saved us 20% in development costs on our next games or engines for the next 10 years?

    Of course this could be a swing-and-a-miss if someone revolutionized how we look at our processors, much like they have done with GPUs, but as we have "barely" entered the multi-core-CPU era, I don't expect this to happen within the next 10 years.

    However, that is all irrelevant, because 10 years is not the time frame; it is not even 5 or 3... the time frame is a year at a time, and the cheapest solution within this year's budget is the chosen one. That's the reason you are looking for; that is why they do it. And this is how almost every, if not all, stock-based corporations operate. Why?... Because they have to satisfy stockholders... There is no other reason or rationalization behind it.

    DICE itself is not a stock-based company, but it is fully owned by Electronic Arts, which is. And so EA's financial numbers are directly impacted by DICE, as it counts towards EA's assets (though not necessarily revenue).

    With that, I am done with you; your best argument seems to be "They did it, so that must be the right thing to do"... When was anyone's choice ever evidence of it being the best one?
  • Tchamber - Friday, December 2, 2011 - link

    I think it's funny that so many people hated on AMD for making a 2B-transistor CPU that runs at 125 watts, but no one says anything about Intel making a 2.27B-transistor CPU that runs HOTTER than the FX's 125 watts. Seems to me there are Intel fanboys too :) I suppose I'm one too; I'm running a Core i7 970.
  • joaopft - Sunday, December 4, 2011 - link

    These SB-E chips have been cobbled together from the Xeon chips. Now Intel is cost-slashing on the enthusiast market? They must be insane! These core-fused parts are an infamy to any enthusiast. Adding injury to infamy, SB-E's price/performance is below SB's, and the performance/watt numbers for this platform are not good at all. Intel is facing stiff competition from... Intel? No doubt it may be tempted to kill the 2600K and the 2700K when Ivy Bridge debuts.
  • danacee - Monday, December 26, 2011 - link

    I am really annoyed by the incredible short-sightedness of Anand's recent microarchitecture articles. The reviews here at times come off as pretentious bile that forgets the primary readers of this website are hardware enthusiasts, not the people in charge of IT department budgets at illustrious companies!

    Would it really kill you to go into a bit more detail on X79- versus X58-era CPUs and their per-MHz scaling and power consumption? Because I can guarantee you NO ONE who owns an 1155 platform is giving X79 a second look; 90% of people upgrading to this platform are coming from the quad-core X58, or even the P35/X38/X48 era, and you completely ignored us.
