"For example, today the fastest LGA-1156 processor is the Core i7 880. When Sandy Bridge launches early next year, the fastest LGA-1155 processor will be the Core i7 2600."
Shouldn't the second one also read LGA-1156? Are they changing the pin count/socket for this 'tock'?
The story of how caches are going to work in the 8+ core world is getting exciting. I like the overview at The Daily Circuitry that summarizes how the Niagara 3, Tilera Gx-100, and BlueGene/P processors weigh in on the issue too http://www.dailycircuitry.com/2010/11/progress-in-...
It's hard to wade through all this data so quickly. That said, as far as overclocking goes, the new LGA-2011 socket will be the successor to 1366?
I hope with all these new overclocking controls there will still be that mainstay $300 CPU that can overclock to some extreme performance, meaning a successor to the i7 920/930 that can deliver the amazing performance those do when overclocked.
I hope this is not the death knell for such a CPU and Intel is expecting us to fork over $1000 for that performance level.
On the 2P Server side of things, I have been told there will be a Westmere v2 coming in January 2011. This is probably the same family that will produce the i7 990 and the other 1366 chips on the chart that don't exist yet. The Xeon 5600 and 970/980 are damn near identical aside from QPI Links.
Since those are being released in January, I wouldn't expect to see a socket 2011 desktop part until basically a year from now.
They will once again be a close relative to the 2P Server family. The socket for the 2P Servers will be Socket R and will be Quad Channel memory as well as supposedly having PCIe 3.0.
If it doubles Clarkdale's GPU performance, then it probably will (at least on lower resolutions). I'm getting pretty decent framerates from Clarkdale on 1360x768 Low and I've been able to play on 1360x768 Medium with a Radeon HD 4550. I think Sandy Bridge is probably closer to the latter than the former in performance.
Exactly my thoughts, that the GPU performance looks to be good enough that Apple could use it for the 13" MBP refresh next year. I'll be glad that I decided to wait, that's for sure.
I was doing some research, and they would have to use the full integrated graphics core, with all 12 EUs, to top the performance of the 320M in the current MacBook Pro 13. I doubt Apple would take a step backwards in graphics performance and use the 6 EU integrated graphics.
And even then the performance would still be pretty close: the 320M would lose to the integrated graphics (12 EUs) by only about 10-13%.
And Llano is still an option, but I have a feeling it will be a dead heat with this.
Apple has to know a lot of the people buying these laptops are far from high-end gamers. There are probably a lot of people with 320Ms who don't need them. We'll see how all the different parts of Sandy Bridge work out. The Core iX processors don't work with Nvidia integrated graphics at all, do they?
Correct on NVIDIA IGPs not working with Core 2010 (and presumably beyond). They need the QPI interface and Intel isn't licensing that to them.
As for Apple, one thing to note is that they've started shipping all laptops with GPUs that support OpenCL it seems, so if Sandy Bridge doesn't have that they may do the same Optimus-style setup as current MBP. Not sure what they'd do with the base MacBook in that case, but Apple seems like they're gearing up to start leveraging OpenCL at some point in the near future. Pure speculation, though, so maybe SB's IGP will be enough, even if it's a step down from G320M.
Aside from the high end (LGA 1366/2011), the bus nVidia needs is DMI, not QPI. If I were nVidia I'd insist on getting rights to both, because QPI is more future-proof. Specifically, more than a few high speed SATA 6Gbps/USB3/etc. devices will be able to saturate DMI, since it's only the equivalent of a PCIe x4 slot (1.0 speed for 1156, 2.0 for 1155/2011), while QPI is a much higher capacity bus similar to AMD's HyperTransport.
While Intel seems determined to milk as much out of the (presumably) cheaper-to-implement DMI bus as it can, sooner or later they're going to need to either mainstream QPI or have the CPU die eat the SATA/USB3 controllers. I find the latter unlikely because it would require cramming even more data lines into the already overcrowded CPU socket area.
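Rough numbers for anyone who wants the math (my own back-of-envelope from the published link rates, so treat it as an estimate): DMI is electrically a PCIe x4 link, so at 2.5 GT/s that's 4 lanes x 250 MB/s = ~1 GB/s per direction (LGA 1156), and ~2 GB/s at 5 GT/s (1155/2011). A single QPI link at 6.4 GT/s carries 16 data bits per direction, or ~12.8 GB/s each way, roughly an order of magnitude more headroom.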
Maybe, but IIRC Apple's biggest issue with the Clarkdale platform on smaller laptops was wanting to maintain CUDA support across their entire platform without adding a 3rd chip to the board, not general GPU performance. Unless the Intel/nVidia lawsuit concludes with nVidia getting a DMI license or Intel getting a CUDA license this isn't going to change.
I don't think it has anything to do with CUDA. I mean, they sell Mac Pros with AMD/ATI cards in them, and those don't support CUDA. It's more about OpenCL and high enough performance. However, just looking at these new performance numbers, I'm willing to say that it'll be the next chip for the MBP 13" easily.
Hmm, they really want all of the systems to have OpenCL? I don't have OpenCL and I don't care at all, and I have CUDA but have only used it once. The 320M doesn't even have OpenCL, does it? Seems like it would be OK for the less expensive ones to have Intel graphics and the higher end ones to have CUDA, OpenCL, and better gaming performance if someone cares about those. They'll keep on upgrading the performance and features of Intel graphics though, who knows.
Nvidia implements an OpenCL run-time by translating OpenCL API calls to CUDA calls. If your card supports CUDA, it supports OpenCL.
The 320M supports OpenCL, and every Apple laptop/desktop that has shipped in the last few years has as well.
A large portion of the motivation for OS X 10.6 (Snow Leopard) was introducing OpenCL support... along with increasing general performance.
There is a large amount of speculation that OS X 10.7 will take advantage of the OpenCL groundwork that OS X 10.6 has put in place.
Also, in the case that you have a GPU that doesn't support OpenCL (older Intel Macs with Intel IGP graphics), Apple has written a CPU-based OpenCL run-time. It'll be slower than GPU, but the programs will still run. That being said, I highly doubt that Apple will be willing to accept such a performance deficit existing in a brand new machine compared to prior hardware.
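To make that concrete, here's a minimal sketch of how an app can ask for a GPU device and fall back to a CPU run-time when none exists, using the standard OpenCL C API (error handling trimmed; device names will vary):
[code]
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char name[256];

    clGetPlatformIDs(1, &platform, NULL);

    /* Prefer a GPU device; fall back to the CPU run-time if none exists. */
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL)
            != CL_SUCCESS)
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);

    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("OpenCL kernels will run on: %s\n", name);
    return 0;
}
[/code]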
It has more to do with nVidia's VP3 PureVideo engine which they rely on for video acceleration. It's as simple as that.
Which is why they only find their place in notebooks. It's also a low-end GPU with enough performance to, say, run a Source game at low res. And they have more complete drivers for OS X.
Will there be a "cheap"(~$300) 6-core LGA-2011 replacement for i7 920/930 or will Intel limit the 6/8 cores to the high-end/extreme price segment ($500+)?
Questionable overclocking is bad enough, but together with...
"There’s no nice way to put this: Sandy Bridge marks the third new socket Intel will have introduced since 2008."
"The CPU and socket are not compatible with existing motherboards or CPUs. That’s right, if you want to buy Sandy Bridge you’ll need a new motherboard."
"In the second half of 2011 Intel will replace LGA-1366 with LGA-2011."
...it is just terrible!
I'll definitely buy AMD Bulldozer, even if it ends up a bit slower. At least they have some respect for their customers and the ability to think ahead when designing sockets (actually, Intel probably has it too, but just likes to milk us on chipset purchases as well). And I am no fanboy; 4 of my 7 PCs are Intel based (two of those 4 were my latest computer purchases).
There will be i7 processors that require three (3!!!) different sockets! Maybe even four when LGA-2011 comes. Intel hasn't been able to get its naming right for quite some time now, but they've outdone themselves this time.
Processor names really should mean something, even if AMD and Intel don't agree. It's annoying that I have to wikipedia a processor (or memorize a thousand processors) to know what it is. We are still getting quotes for three year old Opterons and Xeons (that we're using as desktops no less), those only add to the annoyance.
What ends up happening (good for Intel, bad for technology advancement) is that non-IT people buying computers end up with DDR2-667 based three-year-old desktop processors.
Ummm, but Bulldozer comes with AM3-r2... Just a sketchier way of saying new MB needed.
At least this new Intel isn't trying to BS you. Significant revisions to the architecture require different pin layouts/counts... It is inevitable with processor evolution.
"At least this new Intel isn't trying to BS you. Significant revisions to the architecture require different pin layouts/counts... It is inevitable with processor evolution."
They know in advance what they need and could design a socket to support multiple processors. And i7/i5/i3 definitely don't need different ones.
Eh, no it's not. Bulldozer does NOT work with non-AM3+ mobos.
AMD engineers made a decision not to make it backward compatible for three reasons.
(1) No one but enthusiasts upgrade their CPUs. People in the real world upgrade their whole computer.
(2) Bulldozer introduces new features that won't work with existing Socket AM3 mobos. (Isn't it bloody obvious when they have to introduce a new socket specification?)
(3) It would cost more money and delays if they were to make a backward compatible version of Bulldozer.
As a result, they made a compromise: You can take your existing AM3 CPU to AM3+ mobos, while you wait for Bulldozer to arrive. BUT, you can NOT upgrade your existing AM3 based system to Bulldozer.
Simply put...
AM3+ CPU and AM3+ mobo = OK
AM3 CPU and AM3+ mobo = OK
AM3+ CPU and AM3 mobo = Sorry. No.
So it doesn't matter if AMD "Bulldozer" or Intel "Sandy Bridge". You will need a new mobo.
AMD seriously has their work cut out for them with Bulldozer. The lowest end Sandy Bridge processor absolutely trounced the competition. It's insane what Intel is pulling off here, especially in the integrated graphics arena. Really makes me hope Larrabee comes back as a discrete product in the next few years.
- based on the shown roadmap, the replacement for the i5 760 is actually the i5 2500(K).
- i7 will have even better performance with 8 MB L3 Cache and higher graphics turbo. So there is even more performance potential in the SandyBridge die that Intel could unlock for lower SKUs if needed.
If we go with what Anand has said and use the roadmap to guess pricing I just have one question then:
Why in the world would anyone spend ~$300 on the 2500 and ~$500 on the 2600 and then use the on-chip GPU with no plans for some kind of discrete card?
If the difference between two $600 HPs is Llano vs. Sandy Bridge, Llano has a possibly huge advantage, since I think it's safe to assume that its GPU side will start at 5450 performance.
It's like Intel trying to tell you that an SD Xbox 360 is better than an HD Xbox 360 (Llano). Are you serious? If Llano can hit a PC at that price point with a full shader count, Sandy Bridge is dead in the consumer market.
I know that's a lot of ifs and a lot of time between here and then, but Intel doing what it has always done with graphics (suck) is going to haunt it. I think Intel left the door wide open with its head between it and the frame. All AMD has to do is shut it.
There are people whose workloads are heavily CPU bound but who don't need a heavy duty GPU. Higher end servers and a lot of workstations fall into this category.
Beyond that unless Intel made a GPUless die or deliberately disabled the onboard GPU there's no reason not to include it. While we'll have to wait until Intel shows off labeled die shots I doubt that the GPU is a large enough chunk to justify the engineering effort just to save a little on the manufacturing side.
You are correct, but my point was meant to be about "Best Buy" systems, not servers or workstations. Sorry if I didn't make that clear.
On the server front this will have to go up against Bulldozer which is an entirely different topic.
While it would be foolish for Intel to make a GPU-less die, since integration with the CPU side is inevitable, Larrabee or whatever comes next had better be good. Then there is the driver thing. That Dragon Age: Origins picture sure doesn't look right. For drivers that supposedly still have work to do, that picture looks exactly like the one from when Clarkdale was released. I'd be a little surprised if much driver work is left if those two pictures are actually different.
I'm never buying an Intel CPU or motherboard ever again. This is one area that made them what they are today. The ability to take a mid-range part and clock it up is what made the Core 2 series such a success with gamers and other performance enthusiasts. Not all of the success is attributable to overclocking, but a good bit of the popularity came from a $200 CPU being able to clock up to levels that the $700+ CPUs hit. Now, if the unlocked parts can hit big overclocks and aren't overpriced, then maybe it'll work out. However, it's all too easy for Intel to give us the finger, price a $200 CPU at $600 because it's unlocked, and say "tough crap, if you want to overclock then pay up!". I am hopeful it doesn't come to this.
Anyway, quads are old news IMO...I'm looking at 6core for my next one.
More speed with less power - it looks like a very competitive product. I really hope that AMD has something up their sleeve with Bulldozer and Bobcat to compete with Sandy Bridge.
17% higher performance is just not exciting. You need to give me 50% improvement at least to make me want to spend $1000 for new CPU/Motherboard/memory.
It really hasn't been all that exciting since Core 2 Quad...
We didn't have the system for long enough to rerun the tests with the 5450 on the H67 board. The 5450 is GPU bound at those resolutions/settings however:
I just ran a sanity check on the Core i7 880 with the 5450, the numbers don't move by more than the normal test margins - the 5450 is totally GPU bound here.
Do you know if any of the benchmarks make use of AVX instructions? Sandy Bridge effectively doubles the maximum throughput for compute-intensive operations like SGEMM and DGEMM. While it might not translate to a 2x speedup in real-world applications, I imagine it should give a significant gain, at least in the HPC field.
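To illustrate where the 2x comes from (my own sketch, not from the article): AVX widens the vector registers from 128 to 256 bits, so one multiply plus one add touch 8 floats per instruction instead of 4. Something like this, using compiler intrinsics (Sandy Bridge has no FMA, hence the separate mul and add; assumes n is a multiple of 8 and a compiler with -mavx):
[code]
#include <immintrin.h>

/* y[i] += a * x[i], processing 8 single-precision floats per iteration. */
void saxpy_avx(int n, float a, const float *x, float *y)
{
    __m256 va = _mm256_set1_ps(a);           /* broadcast a into all 8 lanes */
    for (int i = 0; i < n; i += 8) {
        __m256 vx = _mm256_loadu_ps(x + i);  /* load 8 floats at once */
        __m256 vy = _mm256_loadu_ps(y + i);
        vy = _mm256_add_ps(vy, _mm256_mul_ps(va, vx)); /* mul + add, no FMA on SNB */
        _mm256_storeu_ps(y + i, vy);
    }
}
[/code]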
Thank you for the quick answer. It would be great to include some software with AVX support in the full review, when Sandy Bridge launches. Probably the Intel Math Kernel Library will be updated in time.
1. I'd like to see some temp numbers, along with: does the Intel stock HSF actually do the job here? (They have been getting better at that, really.)
2. I didn't see anything about accelerated HD video playback using the on-die GPU?
3. Sure, these CPUs look great from a price/performance standpoint... until you realize you need a full platform upgrade to go along with it. If we assume a mainstream mobo around the $100 mark and RAM to match (since they're taking away the bclk deal), every 2 years is a bit too soon for a full platform upgrade imo.
4. Hardware virtualization parts? I know the current i3 vs. i5/7 chips had some stuff disabled. Will these SB chips follow the same profile?
5. Mobile versions? We know the mobile ones are usually cut back to fit a low TDP profile. Will the same cuts apply as with the current mobile i3/i5 parts (e.g., no real quad core parts)? OTOH, what about the quad core mobiles? The current i7 mobile quads are laughable in their performance and heat output (I'm looking at you, first gen HP Envy). Do you think these SB quad mobiles will actually be decent?
Wikipedia lists both 2 and 4 core mobile parts. Not definitive, but they generally do a good job of keeping up with the latest leaks for things like this.
I'm not sure about this, but I seem to recall having read that the AES-NI instructions use the GPU, at least partially. It makes sense, as the GPU is excellent at parallel tasks. If this is the case, would the 6 EU part perform differently than the 12 EU part at AES?
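For reference, the AES-NI primitives are at least exposed as regular x86 instructions that execute on the CPU cores; whether anything beyond that is offloaded is exactly what I'm unsure about. A minimal sketch of one block encryption using the compiler intrinsics (round keys assumed pre-expanded, key schedule omitted; build with -maes):
[code]
#include <wmmintrin.h>  /* AES-NI intrinsics */

/* AES-128 encryption of one 16-byte block: whitening XOR, nine middle
   rounds, one final round. Key schedule (rk[0..10]) assumed pre-expanded. */
__m128i aes128_encrypt_block(__m128i block, const __m128i rk[11])
{
    block = _mm_xor_si128(block, rk[0]);
    for (int i = 1; i < 10; i++)
        block = _mm_aesenc_si128(block, rk[i]);
    return _mm_aesenclast_si128(block, rk[10]);
}
[/code]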
Any news on when the inevitable Q67 would launch? I guess it's likely that Q67 will use AMT 6.0 as it was a pretty recent upgrade.
With sata III support at launch you'd imagine they'd also support sata III on their gen 3 SSDs. Time will tell I guess.
Wow, bummer. Welcome to the end of Intel bus speed overclocking. I will not be adopting the new sockets unless something happens and Intel changes their minds. Overclocking is not as easy as switching multipliers; even today's EE CPUs show that. 90% of the high overclocks with EE CPUs use a mixture of multiplier and bus speed. I sense, though, that with the higher end socket Intel will allow it. If not, I think it's a very bad move on their part.
I don't think any of the sandy bridge graphics will be able to get to GT 240 levels. This one trades blows with the 5450 as we can see, and just looking at 3DMark06 scores the 5450 scores about 3500 or so, while the GT 240 does maybe 9000 or 10000. If the more powerful sandy bridge graphics can get up to 4000 or 5000 or so that would be great, that would be beating the 9400 GT and closing in on the 9500 GT, not getting to GT 240 levels though. Wonder what the next integrated graphics after this will be like.
I take it this means it will soon be the optimum time to purchase current-gen technology at significantly reduced prices?
Just wanting to build a no nonsense system at slightly below the current price/performance sweet-spot.
Seems Intel is only interested in toying with consumers. They've wasted die space that could've been used for a more capable CPU. How many years have we been chained under the 4GHz frequency mark? Five years or so? Nine women can't make a baby in one month! Not every problem is parallelizable; we need greater frequencies/efficiencies.
Now they are locking processors and playing games with the sockets. No USB 3.0!!?
Garbage, No Thanks!!!
Seems you are giving them a free pass Anand. Very convenient timing to steal AMD's thunder, eh!
I love you man - big fan since the beginning, but you should read Scott Wasson over at Tech Report. Those value scatterplots are very helpful to me - these regurgitated press releases, not so much.
Sorry ;( to be so harsh, but we deserve better than these kiddie chips! Only you can hold them accountable for these failures of imagination.
I am a bit disappointed. Seems like since Intel is wiping the floor with AMD, it decided it was OK to screw us all over with this socket thing. I will still buy an Intel processor if AMD has no cards to play, but I won't be pleased.
I agree that single threaded performance is important to keep in mind. Sandy Bridge had a larger ILP boost than I expected. Final silicon with turbo enabled should address that even more.
We got into trouble chasing the ILP train for years. At this point both AMD and Intel are focused on thread level parallelism. I'm not sure that we'll see significant ILP gains from either party for quite a while now.
The socket move is silly, unfortunately there's nothing that can be done about that. AMD takes better care of its existing board owners, that's something we've pointed out in prior reviews (e.g. our Phenom II X6 review).
I'm not sure I'd call Sandy Bridge a kiddie chip however. It looks like it'll deliver great bang for your buck when it launches in Q1 regardless of how threaded your workload is.
Value scatterplots are a great idea, Scott does a wonderful job with them. We're going to eventually integrate pricing data with Bench (www.anandtech.com/bench) which should help you as well :)
I'm guessing USB 3.0 support will be introduced later with a chipset upgrade. Why are you so concerned with GHz when Sandy Bridge delivers more IPC? I think having better IPC instead of more GHz is better as you'll get potentially lower power consumption.
Let's just hope AMD throws 80 GPU cores into Ontario to put this SB IGP to shame (almost the same performance at less than a 10W TDP). And let's also hope they throw those 400 cores we have been hearing about into Llano.
Sandy Bridge's GPU does not support OpenCL. This is strictly a graphics play, Intel doesn't have an announced GPU compute strategy outside of what it's doing with Larrabee.
Is intel actually still doing anything with Larrabee on the gfx side? I thought they killed it on that end entirely and were looking at it strictly as a compute platform now.
Why not compare with an HD 5570? That is what Llano is supposed to have, a Redwood-class IGP. An HD 5450 is quite pointless; it just reflects competition for Ontario. But Sandy Bridge is not Ontario's competition.
And what about image quality or GPGPU support? Pure FPS numbers are only half of the truth.
Don't think so; it's said that AMD's Fusion built-in GPU will have 400 SPUs (HD 5670-level graphics), and that's a far cry from the HD 5450's 80 SPUs ;)
So if you wanna game, you still have to use something from a real graphics manufacturer like AMD when it comes to GPUs built into CPUs. As an added bonus you also get updated drivers and compatibility with a decade of old DirectX 9 titles, so your old games work without any big problems.
I am impressed that you have a functioning sample at least four months before it's available, run it through enough paces for a review like this, and they let you release the numbers. I mean, are they trying to suppress holiday sales?
When do you think you'll have a Bulldozer sample from AMD to run a similar preview? Barring a surprise from AMD, at this point, it looks like I'll be building an i7 2600 early next year. The similar spec chip from today is an i7-975 Extreme, which is the fastest quad core in the bench, and Sandy Bridge runs 13-14% faster in the only benchmark I care about (x264). I guess even that might change significantly if it can take advantage of this "alleged on-die video transcode engine." I'd not heard of that before.
Honestly we're probably several months out from having Bulldozer silicon in a similar state. With the past few generations of Intel CPUs, by around 4 - 6 months before launch we're usually able to get access to them and they perform very well.
With AMD the lead time is far shorter. I don't expect us to have access to Bulldozer silicon that's worth benchmarking until Q2 2011 at the earliest. I'm more than happy to be proven wrong though :-P
I guess I'm mostly surprised that Intel would do it. Conroe made sense. They had to show the world as early as possible that they had something significantly faster than AMD, suppressing sales of that for their own a little later. But now that they own that performance crown, why show previews so many months early? I suppose I could be over-analyzing it and the vast majority of the market couldn't care less so it makes little difference to their bottom line. Bragging rights simply make for good PR.
Sad to see Bulldozer so far out. I assume the server chips will ship before the consumer ones, too, so it'll be at least a solid year before it could be in my hands, anyway. Oh well. To be honest, my C2D E6400 still does well enough for me. Maybe I'll just make my upgrade an Intel G3 SSD. If I got both that and SB, I don't know what I'd do with myself.
Oh, I had assumed you got this chip from Intel and they had a typical NDA that said when you could talk about what you found. Where'd it come from, then? One of Intel's motherboard partners with whom you have a friendly relationship?
I must say, I'm really grateful for this article. I'm in the middle of planning an upgrade and information like this is really valuable to me. (and I guess to a lot of people as well!) I would just like you to know that your articles actually do influence some of our buying choices. So... Thank you! :D
Now, all I need is a Bulldozer preview and all the pieces are in place...
This Sandy Bridge review sample does not have Turbo enabled. The CPU runs at 3.1GHz all the time, regardless of workload, as Anand stated.
it says "Both the CPU and GPU on SB will be able to turbo independently of one another. If you’re playing a game that uses more GPU than CPU, the CPU may run at stock speed (or lower) and the GPU can use the additional thermal headroom to clock up. The same applies in reverse if you’re running something computationally intensive."
QUESTIONS
Q} Will the on-die GPU work in tandem with the other discrete GPUs out there, or will it shut off? If yes, will it work when SLI or CrossFire is enabled? :p
Q} Whatever the answer above, will it happen with discrete graphics from both nvidia and ati?
Q} Will there be any possibility to disable ONLY the GPU, and in certain cases ONLY its TURBO feature?
Q} Any possibility of keeping the GPU overclocked the whole time while the CPU is idle?
Q} What about accelerated HD video playback using the on-die GPU?
Q} It supports VT-x and AVX; is it possible for you, Anand, to use specific benchmarks for these instructions? The same request goes for AMD.
Q} As someone asked, will there be a cheap 6+ core processor for the mainstream market?
Q} Again, as per the last comment: when do you think you'll have a Bulldozer sample from AMD to run a similar preview?
These questions must be answered!
All in all, what I think: even if there is a 15-19% performance jump, it's not worth the spending when you consider you have to upgrade the entire platform.
And moreover, limiting overclocking features, damn! A boneheaded decision. I am not in the mood for AMD, but if overclocking takes the hit then I will move over 10000...% :angry:
If you're asking about an SLI/CFX pairing with the IGP almost certainly not. The only company to ever attempt something like that has been Lucid with the Hydra chip and the results have been less than impressive. Architecturally I don't know that it'd even be possible for them to try with the on die GPU. The Hydra chip sat between the CPU and the Gfx cards on the PCIe bus and looked like a single card to the OS. There's no way for them to insert themselves into the middle of the connection to the IGP.
Hmm, based on the roadmap I actually think the i7-2600K will be priced close to the i7-875K. The i7-950 is supposed to drop to $294 next week putting it in the high end Mainstream price range (it'll still be Q3'10 then). Also, all the $500+ processors are in the Performance category (i7-970, $885; i7-960, $562; i7-880, $562).
If the i7-2600K goes for $340 or thereabouts, I can already see supply shortages due to high demand (and the eventual price gouging that would follow).
Right now all desktop parts have 6 EUs, all mobile parts have 12 EUs. There are no exceptions on the mobile side, there may be exceptions on the desktop side but from the information I have (and the performance I saw) this wasn't one of those exceptions.
That seriously doesn't make sense. A couple of possible scenarios, then:
- Performance isn't EU bound, and 2x the EUs only bring 10-20%
- The mobile parts are FAR faster than the desktop parts (unlikely)
- The mobile parts do have 12 EUs, but are clocked low enough to perform like the 6 EU desktop (but why?)
- There will be specialized versions like the i5 661
Actually I think it does. Regardless of whether they have 6 or 12 EUs, it's still not going to be a replacement for anything but the bottom tier of GPUs. However, adding a budget GPU to a desktop system has a fairly minimal opportunity cost, since you're just sticking a card into a slot.
Adding a replacement GPU to a laptop has a much higher opportunity cost: you're paying in board space and internal volume even if power gating etc. minimizes the extra power draw. Doubling the size of the on-die GPU will cost less than adding an external GPU that's twice as fast. You also can't upgrade a laptop GPU later on if you decide you need more power.
Do you think Intel will be sharing preliminary performance/pricing data on LGA 2011 by the time that the first LGA 1155 parts start shipping? I'm on 1366 now and would like to know if staying on the high end platform will be a reasonable option or if there isn't any point in holding off for another 6 months on my upgrade.
1) No USB3 - Major FAIL. Putting USB3 in an Intel chipset would drive huge adoption rates, rather than this limping-in BS by manufacturers today. Not to mention that for hard drives USB2 has been a bottleneck for a long time, whereas only top-end SSDs today are maxing out SATA 3Gbps.
2) Two quad core chips with no HT that are identical except for clock speed, and one of them is essentially the 400 and the other is the 500? WTF? Call them the 2410, 2420, 2430, etc. That gives you like 8 or 9 speed bins for that family. Whoever is doing the numbering at Intel needs a swift kick to the head to get them back on track mentally, as things just get more and more confusing. You have the i3/i5/i7 today, so why not just change it to:
i2 = dual core, no HT/Turbo
i3 = dual core with HT and/or Turbo
i4 = quad core, no HT/Turbo
i5 = quad with
i6 = six without
etc.
As it stands now we have i5 with both dual and quad cores, and i7 with 4 and 6. It just doesn't make sense.
That's quite the IPC improvement there. Not quite Netburst to Core 2 but a lot more than I expected (I was expecting something on the order of 5%, with most gains coming from ramping clocks with the extra headroom of 32nm).
Question is, do I want the i5-2500K more than I loathe Intel's motherboard department? I'm seeing them bring out new sockets almost as often as new processor families, which really, really does not make me confident in the socket's future.
I will wait at least for Bulldozer benches before buying whatever makes sense at that time (okay, probably weighted in AMD's favor). I've lasted 4 years on this Pentium D, I can live another half of one.
Why do some people still compare Netburst vs. Core 2? The Pentium 4 generation was a clock-speed-focused design that FAILED to realize its clock speed potential, so it looked really bad compared to Core 2.
Compared to Core Duo, Core 2 was only 15-20% faster. Sandy Bridge manages another 20%, which is really good for one generation, yeah?
Your excellent article was exciting to read. Thank you!
I noticed a small typo on the Windows 7 Gaming Performance page in the first line under the Data Recovery chart : "Clock for clock...to the i7{5} 760..."
I think that the integrated graphics here are a game changer. Sure, nobody will look to them for serious gaming, but finally they're at a point where if you buy any CPU you will be able to play most games, even if at low settings. I'll be looking forward especially to the mobile CPUs. With Bobcat around the corner, I guess next year we will finally see mainstream notebooks become capable of some game playing, which will be great (and bad for NVIDIA).
What I'd like to see is something like Nvidia's Optimus make it to the desktop. With both AMD and Intel going for on-chip integrated graphics the market is practically begging for a unified standard for graphics switching.
The next-generation IGPs look to be competent enough for anything but high-end gaming, which means I should be able to power down my discrete graphics card completely most of the time. The end result would be significant reductions in noise generation, power usage and heat emissions.
Having discrete graphics cards reduced to basically connector-less, slot-in cards for on-demand co-processing seems the logical step.
Hi there... I'm currently a freelance 3D generalist, and I was going to upgrade my old Core 2 Quad QX6700 to a Core i7 980X. But now I'm not that confident. Sandy Bridge looks amazing, but I was sad to see the new socket; it does not compel me to buy a new motherboard now... Does anyone know if the 1366 socket will stick around for the next-gen high end market? I don't want to shoot myself in the foot here.
As noted in the Intel roadmap in the article, for at least part of 2011 they will be sticking with 1366 for the release of the Core i7 990X (to replace the 980X). However, after that the Intel performance platform will switch over to socket LGA-2011. Here is a quote from the article (page 3).
"Original Nehalem and Gulftown owners have their own socket replacement to look forward to. In the second half of 2011 Intel will replace LGA-1366 with LGA-2011. LGA-2011 adds support for four DDR3 memory channels and the first 6+ core Sandy Bridge processors."
Yeah... as I said, "Sorry, didn't read the last phrase..." but thanks anyway. It's a shame to be always changing sockets, but probably a necessity to evolve the technology.
Having the first chips target the mainstream market is a very smart move by Intel, because that's where AMD makes its money. I'm honestly not blown away by the raw performance numbers, but I am impressed by the overall balance of performance, power consumption, and price points for these next gen CPUs. What I'm really looking forward to is the performance segment of Sandy Bridge.
Not sure I agree with that. From AMD's own figures, Bulldozer is significantly faster than STARS. It would be more realistic to expect Bulldozer to perform closely to Sandy Bridge, however we really need more benchmarks before we get a true idea. Bulldozer looks great on paper, but that's virtually all we have so far.
In any case, you compared Bulldozer to "4C Sandy Class", which would be an 8-thread Sandy Bridge, and thus - at least relatively - high end. And I'm not getting into the core/module argument again... ;)
What I wanted to point out is that Intel sees the 4C Sandy as a "mainstream" part. Reason being they are moving HUGE amounts (compared to AMD) of $150-$250 parts.
On the other hand, AMD sees the mainstream at $100-$200 and that is a Llano market.
For AMD, Zambezi is high-end that justifies discrete GPU.
And Yes, Bulldozer 8C should compare with 4C Sandy favorably, (it would mostly go to pricing).
Seems to me Intel is slowly locking up the overclocking scene because it has no competition. If so, and Intel continues in that direction, it would be a great chance for AMD to win back overclocking fans with something that just isn't locked down in the same way.
Looking at the performance numbers, I see nothing which suggests a product that would beat my current 4GHz i7 860, except for the expensive top-end unlocked option which I wouldn't consider anyway given the price.
Oh well, perhaps my next system will be a 6-core AMD.
Intel has already announced that shipments for revenue will occur in Q4 of this year. So, January launch.
They've also commented that Sandy Bridge OEM demand is very strong, and they are adjusting the 32nm ramp up to increase supply. So January should be a decent launch.
Not surprising-- these parts have been in silicon since LAST summer.
Since you didn't get this chip directly from Intel, I suspect there were no review guidelines for you to follow, like which tests to run and which not to run, etc.
Therefore those game benchmarks were not the result of special optimization in drivers. Which is great, because drivers matter much more than hardware in GPUs. If these are only early indications of what Intel's new GPU can do, I expect there is more to extract from drivers.
You mention a 2-core GPU (12 EU) versus a 1-core GPU (6 EU). Any guess as to what "E" stands for? And it seems like an SLI-like tech rather than actually having more EUs in one chip. The difference being that SLI or Crossfire doesn't get any advantage unless drivers and games work together, which greatly reduces the chances of it working at full performance.
It also seems everyone fails to realize that one of the greatest performance gains will come from AVX. AVX will be like MMX again, back when we had the Pentium. I can't think of any other SSE extension as important to performance as AVX. Once software is specifically optimized for AVX we should get another major lift in performance.
I also heard rumors that 64-bit code will work much better on Sandy Bridge, but I don't know if there is anything we could use to test this.
The OpenCL situation sounds like an Intel management decision rather than a technical one. Maybe Intel will provide, or work with Apple to provide, OpenCL on these GPUs?
You also mention that Intel somehow supports PCI Express 2.0 with 1.0 performance. I don't get that bit. Could you elaborate? 2.5GT/s for the G45 chipset??
If Intel ever decides to finally work on their drivers, then their GPUs will be great for entry level.
Is dual channel DDR3-1333 enough for a quad core CPU + GPU? Or even a dual core CPU? Is the GPU memory bandwidth limited?
Any update on the hardware decoder? And what about the transcoding part?
Would there be a way to lock the GPU at its Turbo clock all the time? Or does the GPU get higher priority in Turbo, etc.?
How big is the die?
P.S. (Any news on the Intel G3 SSD? I am getting worried that the next gen SandForce is too good for Intel.)
"You also mention that Intel somehow support PCI -Express 2.0 with 1.0 performance. I dont get that bit there. Could you elaborate? 2.5GT/s for G45 Chipset??"
PCIe 2.0 included other low-level protocol improvements in addition to the doubled clock speed. Intel only implemented the former, probably because the latter would have strangled the DMI bus.
"Are Dual Channel DDR3 1333 enough for Quad Core CPU + GPU? or even Dual core CPU."
Probably. The performance gains vs. the previous generation aren't that large, and it was enough for anything except pathological test cases (e.g. memory benchmarks). If it weren't, there'd be no reason why Intel couldn't officially support DDR3-1600 in their locked chipsets to give a bit of extra bandwidth.
Could you please clarify and expand on this comment please? Is this true for all Intel chipsets that claim support for PCIe 2.0?
[q]The other major (and welcome) change is the move to PCIe 2.0 lanes running at 5GT/s. Currently, Intel chipsets support PCIe 2.0 but they only run at 2.5GT/s, which limits them to a maximum of 250MB/s per direction per lane. This is a problem with high bandwidth USB 3.0 and 6Gbps SATA interfaces connected over PCIe x1 slots. With the move to 5GT/s, Intel is at feature parity with AMD’s chipsets and more importantly the bandwidth limits are a lot higher. A single PCIe x1 slot on a P67 motherboard can support up to 500MB/s of bandwidth in each direction (1GB/s bidirectional bandwidth).[/q]
If this is true, current Intel chipsets do not really support PCIe 2.0, as 2.5GT/s and 250MB/s is the same effective bandwidth as PCIe 1.1. How did you come across this information? I was looking for ways to measure PCIe bandwidth but only found obscure proprietary tools that aren't publicly available.
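For what it's worth, the per-lane arithmetic does work out that way: 2.5 GT/s with 8b/10b encoding is 2.0 Gbit/s of payload, i.e. 250 MB/s per direction per lane, which is exactly PCIe 1.x throughput; 5 GT/s doubles that to 500 MB/s.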
If Intel chipsets are only running at PCIe 1.1 regardless of what they're claiming externally, that would explain some of the complaints/concerns about bandwidth on older Intel chipsets.
OK it seems as if you were referring to the PCIe lanes connected off the actual P67 chipset, not the native PCIe controller integrated into the CPU. I do recall the P55 chipset supporting PCIe 2.0 but limiting it to PCIe 1.0 bandwidth for interconnects like USB or SATA controllers.
Overall it looks like Sandy Bridge is a disappointment. One really has to question why Intel has reversed their Tick-Tock cadence this time around by launching their low/mid-range parts and platform so soon on the heels of P55/Lynnfield/Clarkfield, but I guess it makes more sense in light of the fact that Intel delayed that platform's launch for nearly a year. I would be EXTREMELY disappointed if I had bought a P55 board in the last year only to find out Intel is again requiring a platform/socket change for what appears to be a marginal upgrade.
There's also some clear deficiencies and disappointments in terms of improvements over last-gen platforms with P67:
1) No additional L2/L3 cache, in some cases less than the previous gen.
2) No native USB 3.0 support. One can conjure up myriad reasons for why Intel is resisting USB 3.0 adoption, but it's clearly obvious at this point that they have been resisting it since its inception.
3) Limited SATA 6G support. 2 of 8 ports I believe, but still better than nothing I suppose.
4) No additional PCIe lanes or PCIe 3.0 support, but at least they're finally going to support their actual PCIe 2.0 rated specs?
5) Limited/reduced overclockability. Big mistake imo; Intel seems to be forgetting why AMD was the enthusiast's choice back in the early Athlon/P4 days.
That leaves us with the major improvements:
1) 5-15% improvement clock-for-clock and core-for-core compared to the older Nehalem and Westmere architectures.
2) Lower TDP.
3) A 2x faster GPU that's still too slow for any meaningful gaming.
Hopefully the high-end X58 replacement platform offers bigger improvements. There's also some question as to whether LGA2011 will be HPC/server only, with an intermediary platform (LGA1355) to replace LGA1366; however, early rumors show it will introduce or improve upon many of the deficiencies I listed with P67 and show us what Sandy Bridge is really capable of. Get rid of that extraneous, massive GPU on the high end and replace it with more L3 and execution units and we'll see bigger gains than the underwhelming 5-15% we see with this first version of Sandy Bridge.
Remember, that 5-15% clock-for-clock increase includes turboboost functioning on the current processors, which generally ratchets up the clock speed even in heavily multithreaded loads. It looks like the IPC increase with Sandy Bridge is at least 20% here. I would consider that fairly significant considering that Intel is already on top of the market with no real competition, other than for AMD to sell its top-of-the-line CPU's for cheap.
It's also weird to see people deride IGP improvements that double the performance of the previous version. These integrated graphics are sufficient for probably 85% of the market (pretty much everyone who doesn't need to play current high-end games). Basically, the majority of people will be getting a free $50 graphics card built in to their processor, which itself is giving you a 20-40% performance improvement over a similarly priced last-gen processor.
Yeah I actually factored Turbo Boost not working on Sandy Bridge, as otherwise it would probably be closer to 0-10% increase clock-for-clock. Anand pegs SB ~10% faster overall clock-for-clock in his conclusion with another 3-7% with Turbo.
Also, tempering any excitement over that 10% IPC increase we have the very bad news about Intel limiting overclocking significantly, so for virtually anyone who already owns a P55/Lynnfield/Clarkfield combo anything but a "K" designated chip may actually be a downgrade as you won't be able to easily enable your own homebrewed "Turbo" any longer with most Sandy Bridge SKUs. I'd say the nearly guaranteed 30-40% OC you lose far outweighs the prospective 10-15% clock-for-clock gain you'd see with Sandy Bridge.
As for the IGP being sufficient or any great accomplishment for Sandy Bridge... I'd disagree. Sure, I guess it's great news for Intel that SB is actually able to adequately accelerate 1080p, but it's still far from replacing even mid-range parts from 2-3 generations ago. If Anand ran some benchmarks at resolutions and settings people actually used it might be more relevant, but the fact of the matter is that ~80% of "gamers" are gaming at resolutions of 1280x1024 or higher according to the Steam survey: http://store.steampowered.com/hwsurvey/
My issue with the IGP is it's going to take up significant die space; I estimate at least as much die area for the 2C IGP relative to the rest of the 4C CPU, using Clarkdale as a guideline. For those who have no interest in an IGP, or who go with the P67 platform that doesn't even support it, that's a waste of die space that you're still absorbing and paying for.
I just find it amazingly ironic how times have changed: the CPU was once thought of as the "general purpose" ASIC and the GPU as the "fixed-function", inflexible ASIC. With Sandy Bridge, we now have the CPU, an on-die IGP, and even talk of an integrated super-sekret hardware video transcoder! Roles have clearly reversed, as the CPU becomes increasingly segmented and specialized while the GPU continues to evolve toward general purpose flexibility.
In that sense, I really think AMD has the right approach with Fusion, as their ALU and FPUs will be shared on their Bulldozer and Bobcat designs rather than segregated and specialized like on Sandy Bridge with its single-purposed CPU cores and IGP EUs.
80% of steam users is not the same thing as 80% of total PC buyers, or even 80% of the total gamers (think facebook games, etc). Serious gamers are not, any more than overclockers, a core market for Intel or AMD's CPU divisions.
Yes I'm well aware Steam users do not make up 100% of the total PC market, but I would say it is a fair representation of the kind of hardware and resolution actual gamers use. In those same browser-based games you're referring to, any existing IGP would have been adequate but that's clearly not the market Intel is trying to entice or the point of the comparison, buyers who would otherwise choose discrete GPUs.
As you can see, most of these users are not using Intel IGPs (only 7%) because they are inadequate for actual gaming at the resolutions ~80% of them game at, 1280x1024 or higher, so benching a bunch of games at 1024x768 and trying to pass off this new IGP as adequate tells me nothing as its not indicative of real world applications.
Also, I'd take this a step further and argue the vast majority of those buying one of these new Sandy Bridge processors and systems would opt for a much higher resolution than even 1280x1024, as the most common desktop resolutions available for purchase today are going to be wide aspect 1680x1050, 1920x1080, and 1920x1200 displays. When's the last time you were able to buy an OEM build with a 1024x768 native display or even a 4:3 or 5:4 display for that matter?
If Intel and AT want to pass this IGP off as an HD gaming solution to rival discrete solutions, bench some resolutions and settings people would actually expect to game at.
No, the 10% average outperformance in this review (see the conclusion) is against the i7 880 which has been allowed to turbo.
Anand uses "clock-for-clock" to distinguish that part from the "same price replacement" the i5 760.
So it achieves 10% average outperformance against a part that runs ~20% faster on single-threaded loads, ~15% faster on 2 threads... down to a bin or so of turboing on fully-threaded loads.
That puts the clock/clock performance improvement at around 20%, and this is not including AVX / hardware transcoding.
Yes the i7 880 is the basis for the clock-for-clock comparisons to come to 10% increase, with Turbo on SB he expects another 3-7% increase which is again, in-line with my estimate of 5-15% instead of 0-10% gain, clock-for-clock with and without Turbo.
From the conclusion verbatim: "Sandy Bridge seems to offer a 10% increase in performance. Keep in mind that this analysis was done without a functional turbo mode, so the shipping Sandy Bridge CPUs should be even quicker. I'd estimate you can add another 3 - 7% to these numbers for the final chips."
In almost all of the benches in the test you are going to be limited to 1 or 2 Turbo bins max, which is why Anand limited his estimates to 3-7%: all of the tests use more than 1 core. Under the same tests the benefits of Turbo for both Lynnfield and SB are going to be the same, assuming the final Turbo bins and throttling are also the same. So if Lynnfield only gets 1 bin at 2+ cores then SB would only get the same benefit, which is where I'm sure Anand based his estimates (100/3100 is about 3.2%, 200/3100 about 6.5%).
Simply put, a 15% or even 20% clock-for-clock increase after 2 years from a new architecture is underwhelming imo, especially considering everything else they've left out, but I guess this is what we've come to expect and be thrilled about in a market dominated by Intel without any real competition. Sorry, I'm just less than impressed, especially given the artificial restrictions Intel plans to place on overclocking, further reducing any IPC benefits from SB compared to Lynnfield.
If you throw out Netburst, which was a significant decrease in IPC from Pentium III, when have we had significantly greater than 20% IPC increase within 2 years for an architecture? I understand your other complaints (although I don't see what's wrong with just buying the K models, which all indications suggest won't be much more expensive), but what were you really expecting in IPC increases? 40%? 60%?
Netburst was a reduction in IPC but a tripling of clockspeed compared to P3, but surely you aren't forgetting the incredible gains in IPC from Netburst to Yonah (Core) and Conroe (Core 2)?
Conroe effectively increased performance 100% clock-for-clock from P4 (or 50% or so from Yonah), as it offered some 50% better performance at 50% lower clockspeeds compared to Netburst. While I certainly don't expect that kind of revolutionary product every 2-3 years, we're not even close to that kind of gain in the 4-5 years since Conroe was introduced with not even that much aggregrate difference from Conroe/Penryn/Nehalem/Westmere to SB. From Conroe to SB, clock for clock, we're maybe looking at 50% improvement?
That's 2 full Tick-Tock cycles, signaling Moore's Law is clearly dead to Intel when it comes to performance; they only loosely follow its cadence in terms of refreshes, die sizes, transistor counts and fab processes. In order to achieve those kinds of gains, they had to redesign their CPU nearly from the ground up to compete with AMD, which had the performance lead at the time. Intel clearly hasn't felt the need to improve or innovate significantly since then, as AMD is still essentially 2 generations behind in performance, about on par with Intel's Penryn offerings at this point.
So you're saying that integrated graphics should either be able to handle high resolution gaming using at least medium settings on the upper echelon of current games or they should not be included? That's fairly narrow minded. The bottom line is that most people will never need a better graphics card than SB provides, and the people who do are probably going to buy a $200+ graphics card anyway and replace it every summer, so are they really going to care if the integrated graphics drive the price of their $200 processor up by $10-20? Alternatively, this chip is begging for some sort of Optimus-like option, which will allow hardcore gamers to buy the graphics card they want, AND not have to chew up 100W of graphics power while browsing the web or watching a movie.
Regardless, for people who aren't hard core gamers, the IGP on SB replaces the need to buy something like a Radeon HD 5450, ultimately saving them money. This seems like a positive step to me.
No, I'm saying if this is being advertised as a suitable discrete GPU replacement, it should be compared to discrete GPUs at resolutions and settings you would expect a discrete GPU to handle and not IGPs that we already know are too slow to matter. 1024x768 and all lowest settings doesn't fit that criteria. Flash and web-based games don't either, since they don't even require a 3D accelerator in order to run (Intel's workaround Broadcom chip would be fine).
Again, this card wouldn't even hold a candle to a mid-range $200 GPU from 3 years ago, the 8800GT would still do cartwheels all over it. You can buy these cards for much less than $100, even the GT240 or 4850 for example have been selling for less than $50 after MIR and would be a much more capable gaming card.
Also, you're badly mistaken if you think this GPU is free by any means, as the cost of integrating a GPU onto SB's die comes at the expense of what could've been more actual CPU. So instead of better CPU performance this generation, you trade it for mediocre graphics performance. There is a price to pay for that relatively massive IGP whether you think so or not; you are paying for it.
Actually it sounds like you don't know what you're talking about or you didn't read the article:
"Only the Core i7 2600 has an 8MB L3 cache, the 2400, 2500 and 2600 have a 6MB L3 and the 2100 has a 3MB L3. The L3 size should matter more with Sandy Bridge due to the fact that it’s shared by the GPU in those cases where the integrated graphics is active. I am a bit puzzled why Intel strayed from the steadfast 2MB L3 per core Nehalem’s lead architect wanted to commit to. I guess I’ll find out more from him at IDF :)"
You might've missed it, but it's very clearly stated in the tables that only the 2600 has the same 8MB L3, or 2MB per core, as previous 4C chips like Bloomfield/Lynnfield/Westmere/Clarkdale. The rest have 6MB or 3MB, which is less than the 8MB or 4MB L3 used on the previous generation chips.
This may change with the high-end/enthusiast platform, but again, the amount of L3 cache is actually going to be a downgrade on many of these Sandy Bridge SKUs for anyone who already owns a Nehalem/Westmere based CPU.
The 990X is a Gulftown part on 1366 that's 130MHz faster than the 980X... it will cost $1000 and come out at the same time as the 2600 (which will cost about half as much and deliver 90% of the performance), and at most a couple of months before the i7-2800K, which will cost less and trounce it performance-wise.
You'd have to REALLY want those extra cores to buy a 990x on a lame-duck socket at that point!
Anand, can you provide some more info on what the system configuration was when running the power tests? The test setup lists 2 vid cards and it's not clear which was used when deriving the power graphs. Also, what PSU was used? Just wondering since if it was a 1200W behemoth, then the 63W idle might really be 30W on a more reasonable PSU (assuming no vid cards)... As always, thanks for the article!
No USB 3.0 support and a half-baked SATA 6Gbps implementation. I could be a bit too harsh about the latter (can't say if SATA 6Gbps on a 6-series chipset will perform poorly or not), but why are they going with only two 6Gb/s ports? I understand that most people are likely to buy only one SSD or so in the near future, but what about in a few years when these things become mainstream? At least AMD took SATA 6Gbps seriously, even if they couldn't quite make it work initially (we need a follow-up on the 8-series chipsets' SATA performance!)
Not only is Intel overlooking advances in technologies other than CPUs (which are important to most consumers, whether they are aware of it or not), but it is also shutting out other companies who might have more focus in those areas. I wonder if Nvidia or anyone else will bother to release a chipset for Intel's latest and greatest.
I bought basically what is an i5-750 based on Anand's review here. Or at least, the Xeon version with hyperthreading (needed ECC RAM).
From what I can tell, you get about a 20%-30% improvement over the i5-750, with the same power consumption. That's pretty good. Not only that, you get some competent entry level graphics... which would have good open source drivers. That's somewhat exciting, though I wonder whether it would do multiple monitors. Any idea on that Anand?
Maybe I'll just stick to the cheapest Nvidia discrete cards I can buy, a couple G210s do the trick (to get 4 1920x1200 monitors). Unless Intel can make those G210s redundant, it represents just an incremental bump in performance, as the only thing that is of interest is the increase in CPU. One thing that is nice is that Intel is reputed to have the best open source support for their GPU drivers, which makes things really interesting now they are producing stuff that will compete with the entry level discrete market. It could be really good for Linux/BSD people like myself.
The other thing of interest for me is in the low power, low cost, high numbers of SATA connections space, with ECC. I wonder if Bobcat will have something there, as AMD don't seek to arbitrarily differentiate their markets like Intel does with the ECC RAM.
Also not really sure what the big thing is with the motherboards and same CPU. I tend to keep the same computer as a build. By the time you want to upgrade the CPU, there is invariably other stuff that needs upgrading, e.g. USB3, graphics, SATA, RAM, whatever. So you end up wasting the old parts for not that much benefit. Better to just re-purpose the old machine, and when you have enough money, buy the most performant parts that are still good bang for buck. A good example was the i5-750 about 8 months ago or so. So I don't fault Intel for this.
Intel's going after the mid range market, where most of the money is. We'll have to wait and see how good AMD's Fusion mid range ends up being. Even if it catches up all the way and achieves performance parity so AMD can make more money by raising prices, Intel would have their newest gen on the market first. Fusion had better be really, really, good...
I'd say if that 2500K is $215 or less it'd be a fair buy. I'd still wait for the price to drop below $200 because that's my absolute cap on a CPU. I am a little annoyed that it doesn't have Hyper-Threading though, from a moral standpoint. I mean, from a raw-material standpoint, how much does adding Hyper-Threading cost? Nothing! Yeah, that's what I thought.
Those are some impressive integrated graphics. I've thought this for a while now, but we really don't need a card any lower than the HD 5670, and maybe the 5650, in discrete graphics. Preferably just the 5670, though. If mobo makers start setting aside a single DDR3 slot for the integrated GPU to use as dedicated GPU-only memory, like a discrete GPU has, so the integrated GPU doesn't have to share system RAM, we really won't need low-end graphics in laptops at all anymore.
The 5450 is the LOWEST-END card, a facelifted 4350 from 2008. And the 780G it FINALLY manages to outpace, in 2011, is the mainstream part of 2008 too. In 2011 there will be a 10W Ontario with a 5450-class GPU on 40nm bulk...
On the other hand it seems Intel is taking the GPU side seriously. Finally.
I would have liked to have seen a better comparison when it comes to idle power consumption. How much has it improved with the move from a 45nm to a 32nm GPU?
Also, has Intel addressed the Clarkdale issue of not outputting the industry-standard 24fps (23.976Hz)?
I was planning on purchasing an i5-760 in 2 weeks but looks like I'll have to settle for 2nd hand, low end parts instead and wait for the i5-2400's release.
Great job Intel. I for one no longer have that much interest in overclocking when I have Turbo boost to compensate for that.
Intel could have hit one out of the park with this one if it worked on existing s1156 motherboards... unfortunately it doesn't, and it screws over everyone who bought into s1156 or s1366... yet again.
Hi everyone, I've read the preview and I am not so impressed by the performance of SB. The IGP is great, but it makes sense only for the mobile segment of the PC market, not the desktop!! From the preview I understand that this is not a real Fusion product but an evolution of the Clarkdale and Arrandale products. So I will wait for Llano to see what AMD has to offer!!!
The GPU is on the same die, so it depends on what you mean by a true "Fusion" product. By AMD's definition (AMD being the creator of the term "Fusion"), it is a Fusion product.
You get about 10% more IPC on average. It varies widely, from 5% to ~30%, clock for clock.
None of these tests have AVX code. I am not sure if you need to recompile to take advantage of the additional width over the older SSE code (I'm thinking such a change in instruction encoding should require one). AVX should offer some more improvement in many areas.
So much performance is here with even less peak power usage. If you factor in Turbo mode, Sandy Bridge actually gives you a huge boost in performance per watt!!!
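To make the recompile question concrete, here's a minimal, hypothetical sketch of AVX at the source level - an array add using compiler intrinsics. Existing SSE binaries won't use the wider registers on their own; code like this has to be written (or auto-vectorized) and built with an AVX-capable compiler, e.g. gcc with -mavx:

    #include <immintrin.h>  /* AVX intrinsics */

    /* Hypothetical example: out[i] = a[i] + b[i], 8 floats per iteration.
       Assumes n is a multiple of 8 for brevity. */
    void add_avx(const float *a, const float *b, float *out, int n)
    {
        for (int i = 0; i < n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);  /* load 8 floats */
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
        }
    }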
It sounds like Intel has a home run here. At least for my needs. Right now I'm running entirely on Core 2 chips, but I can definitely find a use for all of these.
For my main/gaming desktop, the quad-core i5s seem like they'll be the first upgrade that's big enough to move me away from my E6300 from 4 years ago.
For my HTPC, the integrated graphics seem like they're getting to a point where I can move past my E2180 + 9400 IGP. I need at least some 3D graphics, and the current i3/i5 don't cut it. Even lower power consumption + a faster CPU, all in a presumably smaller package - win.
For my home server, I'd love to put the lowest end i3 in there for great idle power consumption but with the speed to make things happen when it needs to. I'd been contemplating throwing in a quad core, but if the on-die video transcoding engine is legitimate there will be no need for that.
That's still my main unanswered question: what's the deal with the video encoder/transcoder? Does it require explicit software support, or is it compatible with anything that's already out there? I'm mainly interested in real-time streaming over LAN/internet to devices such as an iPad or even a laptop - if it can put out good-quality 720-1080p H.264 at decent bitrates in real time, especially on a low-end chip, I'll be absolutely blown away. Any more info on this?
I do understand some complaints, but Intel is running a business and so they do what is in their best interest.
Yet, concerning USB 3, it seems too much of a disservice to customers that it isn't built in, without any third-party add-on chip!
I think it is shameful of them to delay this further just so that they can get their Light Peak thing into the market. Of which I read nothing in this review, so I wonder, when will even that one come?!
I can only hope AMD does support it (haven't read about it) and they start gaining more market share; maybe that will show these near-sighted Intel guys.
I'm really interested to see how Intel is going to price the higher of these new CPU's, as there are several hurdles:
1) The non-K's are going up against highly overclockable 1366 and 1156 parts. So pricing the K-models too high could mean trouble.
2) The LGA-1356 platform housing the new consumer high-end(LGA-2011 will be server-only) will also arrive later in 2011. Since these are expected to have up to 8 cores, pricing the higher 1155 CPU's too high will force a massive price-drop when 1356 arrives.(Or the P67 platform will collapse.) And 1366 has shown that such a high-end platform needs the equivalent of an i7 920 to be successful. So pricing the 2600K @ $500 seems impossible. Even $300 would not leave room for a $300 1356 part as that will, with 6-8 cores, easily outperform the 2600K.
It will also be quite interesting to see the development of those limits on overclocking when 1356 comes out. As imposing limits there too, could make the entire platform fail.(OCed 2600K better then 6-core 1356 CPU for example.) And of course AMD's response to all this. Will they profit from the overclocking limits of Intel? Will they grab back some high-end? Will they force Intel to change their pricing on 1155/1356?
@Anand:
It would be nice to see another PCIe 2.0 x8 SLI/CF bottleneck test with the new HD 6xxx series when the time comes. I'm interested to see if the GPUs will catch up with Intel's limited platform choice.
I'm disappointed that you didn't test it against 1366 quads. The triple channel memory and a more powerful platform in general have a significant advantage over 1156, so a lot of us are looking at those CPUs. Especially since the i7 950 is about to have its price reduced.
A $1000 six-core 980X doesn't really fit in there, since it's at a totally different price point.
I was all for the 1366 as my next upgrade, but the low power consumption of Sandy Bridge looks very promising in terms of silent computing (less heat).
What do you think the Core i7 980X uses? An LGA 1366 socket with triple-channel memory support. So what makes you think that the Core i7 950 is going to perform any differently?
OK, and the difference in performance would be what now? If they are showing you the difference, and how well the new 2nd-gen CPUs do against even a $1000 CPU, what makes you think the Core i7 950, which is slower than a 980X, would fare any better? It's common logic that if the 2nd-gen chips can run almost on par with a 980X in many benchmark tests, then obviously they're going to run better than a Core i7 950.
I have been living with an AMD Athlon XP 1800+ since 2003ish. This was mostly because I liked the SoundStorm that did a very good Dolby Digital Live output. For the last eight months I've been having to run it at about 2/3rds speed because all the caps on the motherboard burst, and it ran at 80C all the time. The GPU fan died and I wired an 80mm fan on top of it, but it had overheated once too often to do any 3D work. The DVD burner wouldn't read or write, the DVD reader wouldn't open except under duress. The SATA bus started to scramble any data read or written through it, the second LAN port (the good one) died, and the USB would usually demand a musical chairs routine with the mouse and keyboard to get them to work.
So last week I bought all the bits and built a very reasonably priced ($370 with shipping and tax) i3-530 based HTPC. I've never seen anything so gorgeous as the first time I played Avatar in 1080p on the plasma.
Is there a technological reason to bury two sockets that are still alive? They still scream performance! I don't get what Intel wants with this behavior. Hate from the IT sector? I love the performance, but it is designed in such a closed and tricky way, completely dropping two nice, established platforms, that this thing wants to be hated. I hope AMD destroys these crazy ideas of Intel's with Llano OEM sales, even if it's inferior in CPU performance.
I understand what the difference between unlocked, regular, and power saving CPUs are. But what exactly does Intel mean by a Lifestyle processor? How is it different from the others? What exactly is a "Lifestyle CPU"?
I will be buying one of these the day it comes out.
The only question will be whether I get the Core i5 or the Core i7. It will depend on price I guess, as the max I am willing to spend on an i7 CPU is $250.
I've often wondered why people don't use WoW to test the video performance of the computers they are testing, and the obvious occurred to me - it depends so much on where you are and what the population is in the area you are in that the frame rates vary widely. I imagine the frame rates reported here were for an area like Durotar with no one else in sight, heh. It would be a good place in terms of consistency, anyway, though less taxing than somewhere in Storm Peaks.
WoW is often described as a CPU-intensive game, and so a great game to be included in tests of CPUs like you are doing here. Thanks for including it! I hope it is used for more video card tests as well; WoW may not be the most taxing test bed at lower end video, but at upper end in some areas it can hit 4 GHz i7 based Crossfired systems hard. I like playing at 85 Hz everywhere in the WoW universe I go - and Cataclysm will bring new video challenges, I'm sure.
I'm a bit disappointed at Intel's attempt to completely lock us out of overclocking altogether. But maybe this is AMD's chance to win back the enthusiast market. If AMD sold only unlocked parts, they would have a market segment all to themselves...
OK, didn't see it in the article and don't really feel like wading through 200 comments. What I want to know is will we be able to either A) disable the onboard graphics if we have the latest and greatest bad-ass video card...or even better, B) Will it be able to run both at the same time in a configuration where when I'm doing just generic web surfing, emailing, etc, the Intel GPU is doing the work and the discrete card can power down (quieter and less heat generated), and then when I fire up a game, the discrete powers up and the onboard powers down?
Intel is screwing over minorities! Colorblind people unite!
"Both H67 and P67 support 6Gbps SATA, however only on two ports. The remaining 4 SATA ports are 3Gbps. Motherboard manufacturers will color the 6Gbps ports differently to differentiate."
Fantastic preview! I am definitely getting Sandy Bridge now. Apparently the Gigabyte P67-UD7 will have an nForce 200 chip and support full 16X/16X SLI AND CrossFire! It will make a significant upgrade from my Phenom II and I cannot see myself waiting for Bulldozer, which has apparently been delayed (gee, what a surprise!) until Q4 2011.
Catalyst 8.12... WTF! 2-year-old drivers? How much did Intel bribe you to use drivers that old for their competition? That is a really bad path to go down... Tom's did weird stuff like that a while back and lost readers because of it... You just lost my respect, Anand...
I don't think an entire product line of CPUs with on-board graphics is anything really to get excited about, especially for us geeks. I guess I'm just old-school. Sandy Bridge, like Clarkdale, has similar benefits from a single-chip chipset, which is very appealing from a throughput and control standpoint.
Wow, Intel owns when it comes to converting video, beating out much faster dedicated solutions - strange, but still awesome.
I don't know how AMD's going to fare, but I hope their new architecture will at least compete with these CPUs, because for a few years now AMD has been at least a generation's worth of speed behind Intel.
Also, Intel's IGPs are finally gaining some ground in the games department.
200 Comments
foundchild1 - Friday, August 27, 2010 - link
"For example, today the fastest LGA-1156 processor is the Core i7 880. When Sandy Bridge launches early next year, the fastest LGA-1155 processor will be the Core i7 2600."Shouldn't the second one also read LGA-1156? Are they changing the pin count/socket for this 'tock'?
foundchild1 - Friday, August 27, 2010 - link
Well, that's me being an idiot and not reading the whole article first... New socket indeed.
medi01 - Saturday, August 28, 2010 - link
So Intel has locked multipliers because of some other evil companies, eh? To protect the consumers, right?
What a shame... :(
jfelano - Sunday, August 29, 2010 - link
Did you even read the article???? Yes, it's s1155.
wazzap123 - Thursday, November 18, 2010 - link
The story of how caches are going to work in the 8+ core world is getting exciting. I like the overview at the daily circuit that summarizes how Niagara 3, Tilera Gx-100, and BlueGene/P processors weigh in on the issue too: http://www.dailycircuitry.com/2010/11/progress-in-...
dendysutrisna - Friday, August 12, 2011 - link
The Apple iMac 21.5inch is a computer machine which uses the power of Intel Core i5-2400. Look at these page: http://www.bestdealscomputers.net/desktops/new-app... Processors like that, thanks to its strength, could draw the attention of everyone, even computer vendors at the level Apple also has without a doubt to hook them.
Grooveriding - Friday, August 27, 2010 - link
It's hard to wade through all this data so quickly. That said, as far as overclocking, the new 2011 socket will be the successor to 1366?
I hope with all these new overclocking controls there will still be that mainstay $300 CPU that can overclock to some extreme performance. Meaning a successor to the i7 920/930 that can deliver the amazing performance those can when overclocked.
I hope this is not the death knell for such a CPU and Intel is expecting us to fork over $1000 for that performance level.
BSMonitor - Friday, August 27, 2010 - link
Good question, but judging by the road map, the Extreme and Performance segments are still Gulftown processors. I think the 1366 stays for Gulftown.
Casper42 - Saturday, August 28, 2010 - link
On the 2P Server side of things, I have been told there will be a Westmere v2 coming in January 2011.
This is probably the same family that will produce the i7 990 and the other 1366 chips on the chart that don't exist yet. The Xeon 5600 and 970/980 are damn near identical aside from QPI links.
Being those are being released in Jan, I wouldn't expect to see a socket 2011 desktop part until basically a year from now.
They will once again be a close relative to the 2P Server family. The socket for the 2P Servers will be Socket R and will be Quad Channel memory as well as supposedly having PCIe 3.0.
bitzao - Friday, August 27, 2010 - link
Yeah but.... will it run Starcraft II? (on medium)
Anand Lal Shimpi - Friday, August 27, 2010 - link
We'll have to wait a little bit to find out... :)
hnzw rui - Friday, August 27, 2010 - link
If it doubles Clarkdale's GPU performance, then it probably will (at least on lower resolutions). I'm getting pretty decent framerates from Clarkdale on 1360x768 Low and I've been able to play on 1360x768 Medium with a Radeon HD 4550. I think Sandy Bridge is probably closer to the latter than the former in performance.
SteelCity1981 - Friday, August 27, 2010 - link
Any word when Intel will launch a mobile version of this new platform?Anand Lal Shimpi - Friday, August 27, 2010 - link
Q1 2011 :)
BSMonitor - Friday, August 27, 2010 - link
So glad I waited on a 13" MacBook Pro! Sandy Bridge will probably be the next revision for MacBooks, ehh?
cheinonen - Friday, August 27, 2010 - link
Exactly my thoughts, that the GPU performance looks to be good enough that Apple could use it for the 13" MBP refresh next year. I'll be glad that I decided to wait, that's for sure.
synergist - Friday, August 27, 2010 - link
I was doing some research, and they would have to use the full integrated graphics core, with the 12 EUs, to top the performance of the 320M in the current MacBook Pro 13. I doubt Apple would take a step backwards in gfx performance and use the 6-EU integrated gfx.
And the performance would still be pretty close: the 320M would lose to the integrated gfx (12 EUs) by about 10-13%.
and llano is still an option, but I have a feeling it will be a dead heat with this.
starfalcon - Friday, August 27, 2010 - link
Apple has to know a lot of the people buying the laptops are far from high-end gamers. The amount of people with 320Ms who don't need them is probably a lot.
We'll see how all the different parts of Sandy Bridge work out.
Don't the Core iX processors not work with Nvidia Integrated graphics at all?
JarredWalton - Friday, August 27, 2010 - link
Correct on NVIDIA IGPs not working with Core 2010 (and presumably beyond). They need the QPI interface and Intel isn't licensing that to them.
As for Apple, one thing to note is that they've started shipping all laptops with GPUs that support OpenCL, it seems, so if Sandy Bridge doesn't have that they may do the same Optimus-style setup as the current MBP. Not sure what they'd do with the base MacBook in that case, but Apple seems like they're gearing up to start leveraging OpenCL at some point in the near future. Pure speculation, though, so maybe SB's IGP will be enough, even if it's a step down from the G320M.
DanNeely - Friday, August 27, 2010 - link
Aside from on the high end (LGA 1366/2011), the bus nVidia needs is DMI, not QPI. If I was nVidia I'd insist on getting rights to both because QPI is more futureproof. Specifically, having more than a few high-speed SATA 6Gbps/USB3/etc. devices will be able to saturate DMI since it's only the equivalent of a PCIe 4x slot (1.0 speed for 1156, 2.0 for 1155/2011), while QPI is a much higher capacity bus similar to AMD's HyperTransport.
While Intel seems determined to milk as much out of the (presumably) cheaper-to-implement DMI bus as it can, sooner or later they're going to either need to mainstream QPI or have the CPU die eat the SATA/USB3 controllers. I find the latter unlikely because it would require cramming even more data lines into the already overcrowded CPU socket area.
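As a rough back-of-the-envelope check of that saturation claim (using assumed, approximate throughput figures, not numbers from the article):

    #include <stdio.h>

    int main(void)
    {
        /* Assumed round numbers: ~500 MB/s per PCIe 2.0 lane after
           encoding overhead; DMI treated as a 4-lane-equivalent link. */
        double dmi  = 4 * 500.0;  /* ~2 GB/s total budget      */
        double ssds = 2 * 550.0;  /* two fast SATA 6Gbps SSDs  */
        double usb3 = 2 * 400.0;  /* two busy USB3 ports       */
        printf("DMI budget %.0f MB/s vs. demand %.0f MB/s\n",
               dmi, ssds + usb3); /* demand exceeds the budget */
        return 0;
    }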
DanNeely - Friday, August 27, 2010 - link
Maybe, but IIRC Apple's biggest issue with the Clarkdale platform on smaller laptops was wanting to maintain CUDA support across their entire platform without adding a 3rd chip to the board, not general GPU performance. Unless the Intel/nVidia lawsuit concludes with nVidia getting a DMI license or Intel getting a CUDA license this isn't going to change.
Pinski - Saturday, August 28, 2010 - link
I don't think it has anything to do with CUDA. I mean, they sell Mac Pros with AMD/ATI cards in them, and those don't support CUDA. It's more about OpenCL and high enough performance. However, just looking at these new performance numbers, I'm willing to say that it'll be the next chip for the MBP 13" easily.
Pinski - Saturday, August 28, 2010 - link
Well, wait never mind. Apparently it doesn't support OpenCL, which basically puts it out of the picture for Apple to use.
starfalcon - Saturday, August 28, 2010 - link
Hmm, they really want all of the systems to have OpenCL? I don't have OpenCL and I don't care at all, and I have CUDA but have only used it once.
The 320M doesn't even have OpenCL, does it?
Seems like it would be ok for the less expensive ones to have Intel graphics and the higher end ones to have CUDA, OpenCL, and better gaming performance if someone cares about those.
They'll keep on upgrading the performance and features of Intel graphics though, who knows.
Veerappan - Thursday, September 2, 2010 - link
No, just ... no.
Nvidia implements an OpenCL run-time by translating OpenCL API calls to CUDA calls. If your card supports CUDA, it supports OpenCL.
The 320M supports OpenCL, and every Apple laptop/desktop that has shipped in the last few years has as well.
A large portion of the motivation for OS X 10.6 (Snow Leopard) was introducing OpenCL support.. along with increasing general performance.
There is a large amount of speculation that OS X 10.7 will take advantage of the OpenCL groundwork that OS X 10.6 has put in place.
Also, in the case that you have a GPU that doesn't support OpenCL (older Intel Macs with Intel IGP graphics), Apple has written a CPU-based OpenCL run-time. It'll be slower than GPU, but the programs will still run. That being said, I highly doubt that Apple will be willing to accept such a performance deficit existing in a brand new machine compared to prior hardware.
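For illustration, this is roughly what that fallback looks like from the application side - a minimal sketch using the standard OpenCL device-query calls, asking for a GPU device first and dropping back to a CPU run-time if none is present:

    #include <stdio.h>
    #ifdef __APPLE__
    #include <OpenCL/opencl.h>
    #else
    #include <CL/cl.h>
    #endif

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;

        if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS)
            return 1;
        /* Prefer a GPU device; fall back to a CPU device if none exists. */
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1,
                           &device, NULL) != CL_SUCCESS) {
            if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1,
                               &device, NULL) != CL_SUCCESS)
                return 1;
            printf("No OpenCL GPU found, using the CPU device\n");
        }
        return 0;
    }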
Penti - Saturday, August 28, 2010 - link
It has more to do with nVidia's VP3 PureVideo engine, which they rely on for video acceleration. It's as simple as that.
Which is why they only find their place in the notebooks. It's also a low-end GPU with enough performance to, say, run a Source game at low res. And they have more complete drivers for OS X.
CUDA is a third-party add-on. OpenCL isn't.
burek - Friday, August 27, 2010 - link
Will there be a "cheap"(~$300) 6-core LGA-2011 replacement for i7 920/930 or will Intel limit the 6/8 cores to the high-end/extreme price segment ($500+)?DJMiggy - Friday, August 27, 2010 - link
yea I doubt that will happen. It would be like trying to SLI/crossfire an nvidia to an ati discrete. You would need a special chip like the hyrda one.
DJMiggy - Friday, August 27, 2010 - link
Hydra even. Hydra Lucid chip.
Touche - Friday, August 27, 2010 - link
Questionable overclocking is bad enough, but together with...
"There’s no nice way to put this: Sandy Bridge marks the third new socket Intel will have introduced since 2008."
"The CPU and socket are not compatible with existing motherboards or CPUs. That’s right, if you want to buy Sandy Bridge you’ll need a new motherboard."
"In the second half of 2011 Intel will replace LGA-1366 with LGA-2011."
...it is just terrible!
I'll definitely buy AMD Bulldozer, even if it ends up a bit slower. At least they have some respect for their customers and an ability of forward thinking when designing sockets (actually, Intel probably has it too, but just likes to milk us on chipset purchases also). And I am no fanboy, 4 of my 7 PC's are Intel based (two of those 4 were my latest computer purchases).
Touche - Friday, August 27, 2010 - link
And the naming... OMG!
There will be i7 processors that require three (3!!!) different sockets! Maybe even 4 when 2011 comes. Intel can't get their naming right for quite some time now, but they've outdone themselves this time.
ereavis - Monday, August 30, 2010 - link
Processor names really should mean something, even if AMD and Intel don't agree. It's annoying that I have to Wikipedia a processor (or memorize a thousand processors) to know what it is. We are still getting quotes for three-year-old Opterons and Xeons (that we're using as desktops, no less); those only add to the annoyance.
What ends up happening - good for Intel, bad for technology advancement - is non-IT-type people buying computers end up with DDR2-667-based, three-year-old desktop processors.
BSMonitor - Friday, August 27, 2010 - link
Ummm, but Bulldozer comes with AM3-r2... Just a sketchier way of saying a new MB is needed.
At least this new Intel isn't trying to BS you. Significant revisions to the architecture require different pin layouts/counts... It is inevitable with processor evolution.
Touche - Friday, August 27, 2010 - link
Actually, it should be AM3 compatible: http://www.tomshardware.com/reviews/bulldozer-bobc...
Even if it's not, AM2/AM3 lasted quite some time.
"At least this new Intel isn't trying to BS you. Significant revisions to the architecture require different pin layouts/counts... It is inevitable with processor evolution."
They know in advance what they need and could design a socket to support multiple processors. And i7/i5/i3 definitely don't need different ones.
BSMonitor - Friday, August 27, 2010 - link
"Even if it's not, AM2/AM3 lasted quite some time."Not all AM2 processors were compatible with AM2+ MB or vice versa, not all AM3 processors compatible on AM2+ MB.
It's still 3 different sockets.
Marketing buddy, marketing.
By the time 1366 is replaced, it will have been on the market for 4 years.
stmok - Saturday, August 28, 2010 - link
Eh, no it's not. Bulldozer does NOT work with non-AM3+ mobos.
AMD engineers made a decision not to make it backward compatible for three reasons.
(1) No one but enthusiasts upgrade their CPUs. People in the real world upgrade their whole computer.
(2) Bulldozer introduces new features that won't work with existing Socket AM3 mobos. (Isn't it bloody obvious when they have to introduce a new socket specification?)
(3) It would cost more money and delays if they were to make a backward compatible version of Bulldozer.
As a result, they made a compromise:
You can take your existing AM3 CPU to AM3+ mobos, while you wait for Bulldozer to arrive. BUT, you can NOT upgrade your existing AM3 based system to Bulldozer.
Simply put...
AM3+ CPU and AM3+ mobo = OK
AM3 CPU and AM3+ mobo = OK
AM3+ CPU and AM3 mobo = Sorry. No.
So it doesn't matter if AMD "Bulldozer" or Intel "Sandy Bridge". You will need a new mobo.
Ard - Friday, August 27, 2010 - link
AMD seriously has their work cut out for them with Bulldozer. The lowest end Sandy Bridge processor absolutely trounced the competition. It's insane what Intel is pulling off here, especially in the integrated graphics arena. Really makes me hope Larrabee comes back as a discrete product in the next few years.
dgz - Saturday, August 28, 2010 - link
Poor kid, you don't realize the 2400 is not nearly the lowest end.
Finally - Sunday, August 29, 2010 - link
Doesn't that make him a "(filthy) rich kid"?
Quodlibet - Friday, August 27, 2010 - link
- Based on the shown roadmap, the replacement for the i5 760 is actually the i5 2500(K).
- The i7 will have even better performance with 8MB L3 cache and a higher graphics turbo. So there is even more performance potential in the Sandy Bridge die that Intel could unlock for lower SKUs if needed.
Anand Lal Shimpi - Friday, August 27, 2010 - link
You're correct, I didn't feel a dual- vs. quad-core comparison was fair, which is why I focused on the 760. I'll clear up the text though :)
Take care,
Anand
Anand Lal Shimpi - Friday, August 27, 2010 - link
fixed :)
mastrdrver - Friday, August 27, 2010 - link
If we go with what Anand has said and use the roadmap to guess pricing, I just have one question:
Why in the world would anyone spend ~$300 for the 2500 and ~$500 on the 2600, then use the on-chip GPU with no plans for some kind of discrete card?
If the difference between two $600 HPs is Llano vs. Sandy Bridge, Llano has a possibly huge advantage, since I think it's safe to assume that the GPU side will start at 5450 performance.
It's like Intel is trying to tell you that an SD Xbox 360 is better than an HD Xbox 360 (Llano). Are you serious? If Llano can hit a PC at that price point and have a full shader count, Sandy Bridge is dead in the consumer market.
I know that's a lot of ifs, and time between here and then, but Intel doing what it has always done with graphics (suck) is going to haunt it. I think Intel left the door wide open with its head between it and the frame. All AMD has to do is shut it.
DanNeely - Friday, August 27, 2010 - link
There are people whose workloads are heavily CPU-bound but who don't need a heavy-duty GPU. Higher-end servers and a lot of workstations fall into this category.
Beyond that, unless Intel made a GPU-less die or deliberately disabled the onboard GPU, there's no reason not to include it. While we'll have to wait until Intel shows off labeled die shots, I doubt that the GPU is a large enough chunk to justify the engineering effort just to save a little on the manufacturing side.
mastrdrver - Saturday, August 28, 2010 - link
You are correct, but my point was meant to be about "Best Buy" systems, not servers or workstations. Sorry if I didn't make that clear.
On the server front this will have to go up against Bulldozer, which is an entirely different topic.
While it would be foolish for Intel to make a GPU-less die, since integration with the CPU side is inevitable, Larrabee or whatever had better be good. Then there is the driver thing. That Dragon Age: Origins picture sure doesn't look right. For drivers that still have work to do, that picture looks exactly like the one from when Clarkdale was released. I'd be a little surprised if much driver work is left if those two pictures are actually different.
arh2o - Friday, August 27, 2010 - link
How much will these new 1155 motherboards cost? Will they be in the same price range as the current 1156 motherboards?
Rajinder Gill - Saturday, August 28, 2010 - link
I'd imagine the P67 boards will be priced between $150~$200.
odin607 - Friday, August 27, 2010 - link
What about temps =(
cmdrdredd - Friday, August 27, 2010 - link
I'm never buying an Intel CPU or motherboard ever again. This is one area that made them what they are today. The ability to take a mid-range part and clock it up is what made the Core 2 series such a success with gamers and other performance enthusiasts. Not all of the success is attributed to overclocking, but a good bit of the popularity came from a $200 CPU being able to clock up to levels that the $700+ CPUs hit. Now, if the unlocked parts can hit big overclocks and aren't overpriced then maybe it'll work out. However, it's all too easy for Intel to give us the finger and price a $200 CPU at $600 because it's unlocked and say "tough crap, if you want to overclock then pay up!". I am hopeful it doesn't come to this.
Anyway, quads are old news IMO... I'm looking at 6-core for my next one.
JumpingJack - Saturday, August 28, 2010 - link
"but a good bit of the popularity came from a $200 CPU being able to clock up" ...Reading the preview, it looks like the 2500K may fit this description.
From the articles:
"...and the 2500K will replace the i5 760/655K ($205 - $216). ..."
Even the 875K, when it launched, wasn't as you claim... it actually came in 200 bucks cheaper than the 870.
http://techreport.com/articles.x/18988
It would seem to me that Intel has been planning this change for some time and went out to address this...
tech6 - Friday, August 27, 2010 - link
More speed with less power - it looks like a very competitive product. I really hope that AMD has something up their sleeve with Bulldozer and Bobcat to compete with Sandy Bridge.
killless - Friday, August 27, 2010 - link
17% higher performance is just not exciting.
You need to give me at least a 50% improvement to make me want to spend $1000 on a new CPU/motherboard/memory.
It really hasn't been all that exciting since Core 2 Quad...
tatertot - Friday, August 27, 2010 - link
I take it turbo was also disabled on the rest of the parts used to compare, right?
Anand Lal Shimpi - Friday, August 27, 2010 - link
Turbo was enabled on everything else - SB performance should be higher for final parts.
Take care,
Anand
tatertot - Friday, August 27, 2010 - link
Oh!
Well, that puts the IPC gains of Sandy over Westmere at something like 20% then, considering the 880 turbos up to 3.73GHz on single-threaded work.
That's pretty impressive.
Drag0nFire - Friday, August 27, 2010 - link
I just want to say first of all, this totally made my Friday! I love previews of upcoming architectures!
Any news on the roadmap for the mobile variant of Sandy Bridge? Or do I have to wait til IDF?
Jamahl - Friday, August 27, 2010 - link
What system was this benchmarked on, Anand?
Anand Lal Shimpi - Friday, August 27, 2010 - link
Clarkdale - those charts were actually pulled from here, just with the SB numbers added: http://www.anandtech.com/show/2952/2
We didn't have the system for long enough to rerun the tests with the 5450 on the H67 board. The 5450 is GPU bound at those resolutions/settings however:
http://images.anandtech.com/graphs/5450_0203102236...
Those numbers were generated with a Core i7 920.
Take care,
Anand
Anand Lal Shimpi - Friday, August 27, 2010 - link
I just ran a sanity check on the Core i7 880 with the 5450; the numbers don't move by more than the normal test margins - the 5450 is totally GPU bound here.
Take care,
Anand
ESetter - Friday, August 27, 2010 - link
Do you know if any of the benchmarks make use of AVX instructions? Sandy Bridge effectively doubles the maximum throughput for compute-intensive operations like SGEMM and DGEMM. While it might not translate to a 2x speedup in real-world applications, I imagine it should give a significant gain, at least in the HPC field.
Anand Lal Shimpi - Friday, August 27, 2010 - link
I don't believe any of these apps have AVX support, they're all too old for that.
Take care,
Anand
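To put rough numbers on the "doubles the maximum throughput" point above - a back-of-the-envelope sketch assuming one 256-bit AVX multiply and one 256-bit AVX add can issue per cycle per core (16 single-precision FLOPs/cycle, versus 8 with 128-bit SSE) and an illustrative 3.4GHz quad core:

    #include <stdio.h>

    int main(void)
    {
        double ghz = 3.4, cores = 4.0;  /* assumed clock and core count */
        printf("SSE peak: %.1f GFLOPS\n", cores * ghz * 8.0);  /* 108.8 */
        printf("AVX peak: %.1f GFLOPS\n", cores * ghz * 16.0); /* 217.6 */
        return 0;
    }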
ESetter - Friday, August 27, 2010 - link
Thank you for the quick answer. It would be great to include some software with AVX support in the full review, when Sandy Bridge launches. Probably the Intel Math Kernel Library will be updated in time.
darckhart - Friday, August 27, 2010 - link
1. I'd like to see some temp numbers, along with whether the Intel stock HSF actually does the job here (which they have been getting better at, really).
2. I didn't see anything about accelerated HD video playback using the on-die GPU?
3. Sure, these CPUs look great from a price/performance standpoint... until you realize you need a full platform upgrade to go along with it. If we assume a mainstream mobo around the $100 mark, and RAM to match since they're taking away the BCLK deal... every 2 years is a bit too soon for a full platform upgrade, IMO.
4. Hardware virtualization parts? I know the current i3 vs. i5/7 chips had some stuff disabled. Will these SB chips follow the same profile?
5. Mobile versions? We know the mobile ones are usually cut back to fit a low TDP profile. Will the same cuts apply as with the current mobile i3/i5 parts (e.g., no real quad-core parts)? OTOH, what about the quad-core mobiles? The current i7 mobile quads are laughable in their performance and heat output (I'm looking at you, first-gen HP Envy). Do you think these SB quad mobiles will actually be decent?
DanNeely - Friday, August 27, 2010 - link
Wikipedia lists both 2- and 4-core mobile parts. Not definitive, but they generally do a good job of keeping up with the latest leaks for things like this: http://en.wikipedia.org/wiki/Sandy_Bridge_(microar...
hamitaltintop22 - Friday, August 27, 2010 - link
I hope there is a price drop for the i5 750 to around $150 when this comes out, or the i7 920 to $200 (no Microcenter here).
DesktopMan - Friday, August 27, 2010 - link
I'm not sure about this, but I seem to recall having read that AES-NI instructions use the GPU, at least partially. It makes sense, as the GPU is excellent at parallel tasks. If this is the case, would the 6 EU part perform differently than the 12 EU part at AES?
Any news on when the inevitable Q67 would launch? I guess it's likely that Q67 will use AMT 6.0, as that was a pretty recent upgrade.
With SATA III support at launch you'd imagine they'd also support SATA III on their gen 3 SSDs. Time will tell, I guess.
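For what it's worth, AES-NI is implemented as CPU instructions that operate on the SSE (XMM) registers, not on the GPU, so the EU count shouldn't matter for AES. A minimal sketch of one encryption round via the intrinsic (build with gcc -maes):

    #include <wmmintrin.h>  /* AES-NI intrinsics */

    /* One AES encryption round on a 128-bit block. Both operands live in
       CPU XMM registers; the GPU is not involved at all. */
    __m128i aes_round(__m128i block, __m128i round_key)
    {
        return _mm_aesenc_si128(block, round_key);
    }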
overclocking101 - Friday, August 27, 2010 - link
Wow, bummer. Welcome to the end of Intel bus-speed overclocking. I will not be adopting the new sockets unless something happens and Intel changes their mind. Overclocking is not as easy as switching multipliers; even the EE CPUs of nowadays show that. 90% of the high overclocks with EE CPUs show that a mixture of multi and bus speed is needed. I sense, though, that with the higher-end socket Intel will allow it. If not, I think it's a very bad move on their part.
starfalcon - Friday, August 27, 2010 - link
I don't think any of the Sandy Bridge graphics will be able to get to GT 240 levels.
This one trades blows with the 5450 as we can see, and just looking at 3DMark06 scores, the 5450 scores about 3500 or so, while the GT 240 does maybe 9000 or 10000.
If the more powerful sandy bridge graphics can get up to 4000 or 5000 or so that would be great, that would be beating the 9400 GT and closing in on the 9500 GT, not getting to GT 240 levels though. Wonder what the next integrated graphics after this will be like.
TETRONG - Friday, August 27, 2010 - link
I take it this means it will soon be the optimum time to purchase current-gen technology at significantly reduced prices?Just wanting to build a no nonsense system at slightly below the current price/performance sweet-spot.
Seems Intel are only interested in toying with consumers.
They've wasted die space that could've been used for a more capable CPU. How many years have we been chained under 4GHz? 5 years or so?
Nine women can't make a baby in one month! Not every problem is parallelizable - we need greater frequencies/efficiencies.
Now they are locking processors and playing games with the sockets. No USB 3.0!!?
Garbage, No Thanks!!!
Seems you are giving them a free pass Anand. Very convenient timing to steal AMD's thunder, eh!
I love you man - big fan since the beginning, but you should read Scott Wasson over at Tech Report. Those value scatterplots are very helpful to me - these regurgitated press releases, not so much.
Sorry ;( to be so harsh, but we deserve better than these kiddie chips!
Only you can hold them accountable for these failures of imagination.
wyvernknight - Friday, August 27, 2010 - link
I am a bit disappointed. It seems like since Intel is wiping the floor with AMD, they decided it was OK to screw us all over with this socket thing. I will still buy an Intel processor if AMD has no cards to play, but I won't be pleased.
DrRap - Friday, August 27, 2010 - link
It's Anand "intel" lal Shimpi.Anand Lal Shimpi - Friday, August 27, 2010 - link
I agree that single-threaded performance is important to keep in mind. Sandy Bridge had a larger ILP boost than I expected. Final silicon with turbo enabled should address that even more.
We got into trouble chasing the ILP train for years. At this point both AMD and Intel are focused on thread-level parallelism. I'm not sure that we'll see significant ILP gains from either party for quite a while now.
The socket move is silly, unfortunately there's nothing that can be done about that. AMD takes better care of its existing board owners, that's something we've pointed out in prior reviews (e.g. our Phenom II X6 review).
I'm not sure I'd call Sandy Bridge a kiddie chip however. It looks like it'll deliver great bang for your buck when it launches in Q1 regardless of how threaded your workload is.
Value scatterplots are a great idea, Scott does a wonderful job with them. We're going to eventually integrate pricing data with Bench (www.anandtech.com/bench) which should help you as well :)
Take care,
Anand
ssj4Gogeta - Saturday, August 28, 2010 - link
I'm guessing USB 3.0 support will be introduced later with a chipset upgrade. Why are you so concerned with GHz when Sandy Bridge delivers more IPC? I think having better IPC instead of more GHz is better, as you'll get potentially lower power consumption.
asmoma - Friday, August 27, 2010 - link
Let's just hope AMD throws 80 GPU cores into Ontario to put this SB IGP to shame (almost the same performance but less than 10W TDP). And let's also hope they throw those 400 cores into Llano we have been hearing about.
mfago - Friday, August 27, 2010 - link
Any news on OpenCL support? I imagine Apple may hold off on a purely integrated GPU unless that is supported.
Thanks!
Anand Lal Shimpi - Friday, August 27, 2010 - link
Sandy Bridge's GPU does not support OpenCL. This is strictly a graphics play; Intel doesn't have an announced GPU compute strategy outside of what it's doing with Larrabee.
Take care,
Anand
DanNeely - Friday, August 27, 2010 - link
Is Intel actually still doing anything with Larrabee on the gfx side? I thought they killed it on that end entirely and were looking at it strictly as a compute platform now.
Anand Lal Shimpi - Saturday, August 28, 2010 - link
Correct - as of today the only Larrabee parts are for the HPC market. Didn't mean to confuse there :)
Take care,
Anand
JonnyDough - Friday, August 27, 2010 - link
"Correction, you'll be able to buy it next year, but you'll get to meet her today."Sandy could be a boy too!
JonnyDough - Friday, August 27, 2010 - link
By the way, is it an it, or a girl? You can't have it both ways!
Why not compare with an HD 5570? That is what Llano is supposed to have, a Redwood-class IGP. An HD 5450 is quite pointless; it just reflects competition for Ontario. But Sandy Bridge is not Ontario's competition.
And what about image quality or GPGPU support? Pure FPS numbers are only half of the truth.
wiak - Friday, August 27, 2010 - link
Don't think so. It's said that AMD's Fusion built-in GPU will have 400 SPUs (HD 5670-level graphics); that's a far cry from the HD 5450's 80 SPUs ;)
So if you wanna game you still have to use something from a real graphics manufacturer like AMD when it comes to GPUs built into CPUs. As an added bonus you also have updated drivers and compatibility with decade-old DirectX 9 titles, so your old games work without any big problems.
icrf - Friday, August 27, 2010 - link
I am impressed that you have a functioning sample at least four months before it's available, ran it through enough paces for a review like this, and they let you release the numbers. I mean, are they trying to suppress holiday sales?
When do you think you'll have a Bulldozer sample from AMD to run a similar preview? Barring a surprise from AMD, at this point it looks like I'll be building an i7 2600 early next year. The similar-spec chip from today is an i7-975 Extreme, which is the fastest quad core in the bench, and Sandy Bridge runs 13-14% faster in the only benchmark I care about (x264). I guess even that might change significantly if it can take advantage of this "alleged on-die video transcode engine." I'd not heard of that before.
Anand Lal Shimpi - Friday, August 27, 2010 - link
Honestly we're probably several months out from having Bulldozer silicon in a similar state. With the past few generations of Intel CPUs, by around 4-6 months before launch we're usually able to get access to them and they perform very well.
With AMD the lead time is far shorter. I don't expect us to have access to Bulldozer silicon that's worth benchmarking until Q2 2011 at the earliest. I'm more than happy to be proven wrong though :-P
icrf - Friday, August 27, 2010 - link
I guess I'm mostly surprised that Intel would do it. Conroe made sense. They had to show the world as early as possible that they had something significantly faster than AMD, suppressing sales of that for their own a little later. But now that they own that performance crown, why show previews so many months early? I suppose I could be over-analyzing it, and the vast majority of the market couldn't care less, so it makes little difference to their bottom line. Bragging rights simply make for good PR.
Sad to see Bulldozer so far out. I assume the server chips will ship before the consumer ones, too, so it'll be at least a solid year before it could be in my hands anyway. Oh well. To be honest, my C2D E6400 still does well enough for me. Maybe I'll just make my upgrade an Intel G3 SSD. If I got both that and SB, I don't know what I'd do with myself.
Thanks, and keep up the good work.
Anand Lal Shimpi - Saturday, August 28, 2010 - link
This preview wasn't Intel sanctioned; I believe Intel will release its own numbers at IDF in a few weeks.
Take care,
Anand
icrf - Saturday, August 28, 2010 - link
Oh, I had assumed you got this chip from Intel and they had a typical NDA that said when you could talk about what you found. Where'd it come from, then? One of Intel's motherboard partners with whom you have a friendly relationship?
aegisofrime - Saturday, August 28, 2010 - link
I must say, I'm really grateful for this article. I'm in the middle of planning an upgrade and information like this is really valuable to me (and I guess to a lot of people as well!). I would just like you to know that your articles actually do influence some of our buying choices. So... thank you! :D
Now, all I need is a Bulldozer preview and all the pieces are in place...
vajm1234 - Friday, August 27, 2010 - link
A few things are clear and a few unclear as of now.
This Sandy Bridge review sample does not have Turbo enabled; the CPU runs at 3.1GHz all the time, regardless of workload, as Anand stated.
it says "Both the CPU and GPU on SB will be able to turbo independently of one another. If you’re playing a game that uses more GPU than CPU, the CPU may run at stock speed (or lower) and the GPU can use the additional thermal headroom to clock up. The same applies in reverse if you’re running something computationally intensive."
QUESTIONS
Q} Will the on-die GPU unit work in tandem with other discrete GPUs out there, or will it shut off? If yes, will it work when SLI or CrossFire is enabled? :p
Q} Whatever the above statement says, will it happen if we use discrete graphics from NVIDIA or ATI?
Q} Will there be any possibility to disable ONLY the GPU, and in certain cases ONLY its TURBO feature?
Q} Any possibility of keeping the GPU overclocked the whole time while the CPU is idle?
Q} What about accelerated HD video playback using the on-die GPU?
Q} It supports VT-x and AVX; is it possible for you, Anand, to use specific benchmarks for these instructions? The same request goes for AMD.
Q} As someone asked, will there be a cheap 6-or-more-core processor for the mainstream market?
Q} Again, as per the last comment... when do you think you'll have a Bulldozer sample from AMD to run a similar preview?
These questions must be answered!
All in all, what I think: even if there is a 15-19% performance jump, it's not worth the spending when you consider you have to upgrade the entire platform.
And moreover, limiting overclocking features, damn! A boneheaded decision. I am not in the mood for AMD, but if the overclocking hit lands then I will move 10000...% :angry:
regards
DanNeely - Saturday, August 28, 2010 - link
If you're asking about an SLI/CFX pairing with the IGP, almost certainly not. The only company to ever attempt something like that has been Lucid with the Hydra chip, and the results have been less than impressive. Architecturally I don't know that it'd even be possible for them to try with the on-die GPU. The Hydra chip sat between the CPU and the gfx cards on the PCIe bus and looked like a single card to the OS. There's no way for them to insert themselves into the middle of the connection to the IGP.
teohhanhui - Saturday, August 28, 2010 - link
Just something like nVidia Optimus? Perhaps Intel could come up with a more elegant solution to the same problem...
hnzw rui - Friday, August 27, 2010 - link
Hmm, based on the roadmap I actually think the i7-2600K will be priced close to the i7-875K. The i7-950 is supposed to drop to $294 next week, putting it in the high-end Mainstream price range (it'll still be Q3'10 then). Also, all the $500+ processors are in the Performance category (i7-970, $885; i7-960, $562; i7-880, $562).
If the i7-2600K goes for $340 or thereabouts, I can already see supply shortages due to high demand (and the eventual price gouging that would follow).
liyunjiu - Friday, August 27, 2010 - link
How are the comparisons against NVIDIA low-end discrete/mobile graphics?
tatertot - Friday, August 27, 2010 - link
Hey Anand,
How could you tell that this sample had only 6 execution units active in the GPU vs. the full 12?
Was it just what this particular SKU is supposed to have, or some CPU-Z type info, or... ?
thx
Anand Lal Shimpi - Saturday, August 28, 2010 - link
Right now all desktop parts have 6 EUs, all mobile parts have 12 EUs. There are no exceptions on the mobile side; there may be exceptions on the desktop side, but from the information I have (and the performance I saw) this wasn't one of those exceptions.
Take care,
Anand
steddy - Saturday, August 28, 2010 - link
"all mobile parts have 12 EUs"Sweet! Guess the good 'ol GeForce 310m is on the way out.
mianmian - Saturday, August 28, 2010 - link
The mobile CPU/GPU usually has a much lower frequency.
I guess the 12-EU mobile GPU will perform on par with the desktop 6-EU one.
IntelUser2000 - Saturday, August 28, 2010 - link
That seriously doesn't make sense. A couple of possible scenarios then:
-Performance isn't EU bound and 2x EUs only bring 10-20%
-The mobile parts are FAR faster than desktop parts(unlikely)
-The mobile parts do have 12 EUs, but are clocked low enough to perform like the 6 EU desktop(but why?)
-There will be specialized versions like the i5 661
DanNeely - Sunday, August 29, 2010 - link
Actually I think it does. Regardless of whether they have 6 or 12 EUs, it's still not going to be a replacement for any but the bottom tier of GPUs. However, adding a budget GPU to a desktop system has a fairly minimal opportunity cost since you're just sticking a card into a slot.
Adding a replacement GPU in a laptop has a much higher opportunity cost. You're paying in board space and internal volume even if power gating, etc., minimizes the extra power draw; doubling the size of the on-die GPU will cost less than adding an external GPU that's twice as fast. You also can't upgrade a laptop GPU later on if you decide you need more power.
Anand Lal Shimpi - Tuesday, August 31, 2010 - link
I spoke too soon; it looks like this may have been a 12 EU part. I've updated the article and will post an update as soon as I'm able to confirm it :)
Take care,
Anand
tatertot - Tuesday, August 31, 2010 - link
Can you also confirm whether or not the GPU turbo was also disabled?
DanNeely - Saturday, August 28, 2010 - link
Do you think Intel will be sharing preliminary performance/pricing data on LGA 2011 by the time the first LGA 1155 parts start shipping? I'm on 1366 now and would like to know if staying on the high-end platform will be a reasonable option, or if there isn't any point in holding off for another 6 months on my upgrade.
Anand Lal Shimpi - Saturday, August 28, 2010 - link
I wouldn't expect any near-final LGA-2011 performance data until Q2 next year, well after the LGA-1155 launch.
Take care,
Anand
Casper42 - Saturday, August 28, 2010 - link
2 things jumped out at me:
1) No USB3 - Major FAIL. Putting USB3 in an Intel chipset would drive huge adoption rates, rather than this limping-in BS by manufacturers today. Not to mention that for hard drives, USB2 has been a bottleneck for a long time, whereas only top-end SSDs today are maxing out SATA3.
2) 2 chips with quad core and no HT that are identical except for clock speed, and one of them is essentially the 400 and the other is the 500? WTF? Call them the 2410, 2420, 2430, etc. That gives you like 8 or 9 speed bins for that family. Whoever is doing the numbering at Intel needs a swift kick to the head to get them back on track mentally, as things just get more and more confusing. You have the i3/i5/i7 today, why not just change it to:
i2 = Dual Core no HT/Turbo
i3 = Dual Core with HT and/or Turbo
i4 = Quad Core no HT/Turbo
i5 = Quad WITH
i6 = Six without
etc
As it stands now we have i5 with both dual and quad core, and i7 with 4 and 6 cores. It just doesn't make sense.
dertechie - Saturday, August 28, 2010 - link
That's quite the IPC improvement there. Not quite Netburst to Core 2, but a lot more than I expected (I was expecting something on the order of 5%, with most gains coming from ramping clocks with the extra headroom of 32nm).
Question is, do I want the i5-2500K more than I loathe Intel's motherboard department? I'm seeing them bring out new sockets almost as often as new processor families, which really, really does not make me confident in the socket's future.
I will wait at least for Bulldozer benches before buying whatever makes sense at that time (okay, probably weighted in AMD's favor). I've lasted 4 years on this Pentium D, I can live another half of one.
IntelUser2000 - Saturday, August 28, 2010 - link
Why do some people still compare Netburst vs. Core 2? The Pentium 4 generation was a clock-speed-focused design that FAILED to realize its clock speed potential, so it looked really bad compared to Core 2.
Compared to Core Duo, Core 2 was only 15-20% faster. Sandy Bridge manages to do another 20%, which is really good for one generation, yeah?
ssj4Gogeta - Saturday, August 28, 2010 - link
Pentium D to SB will be such a huuuuge jump, lol.
neslog - Saturday, August 28, 2010 - link
Your excellent article was exciting to read. Thank you!
I noticed a small typo on the Windows 7 Gaming Performance page, in the first line under the Data Recovery chart: "Clock for clock...to the i7{5} 760..."
ET - Saturday, August 28, 2010 - link
I think that the integrated graphics here are a game changer. Sure, nobody will look to them for serious gaming, but finally they're at a point where if you buy any CPU you will be able to play most games, even if at low settings. I'll be looking forward especially to the mobile CPUs. With Bobcat around the corner, I guess next year we will finally see mainstream notebooks become capable of some game playing, which will be great (and bad for NVIDIA).
Exodite - Saturday, August 28, 2010 - link
What I'd like to see is something like NVIDIA's Optimus make it to the desktop. With both AMD and Intel going for on-chip integrated graphics, the market is practically begging for a unified standard for graphics switching.
The next-generation IGPs look to be competent enough for anything but high-end gaming, which means I should be able to power down my discrete graphics card completely most of the time. The end result would be significant reductions in noise generation, power usage and heat emissions.
Having discrete graphics cards reduced to basically connector-less, slot-in cards for on-demand co-processing seems the logical step.
AndreC - Saturday, August 28, 2010 - link
Hi there.. I'm currently a freelance 3D generalist, and I was going to upgrade my old Core 2 Quad QX6700 with a Core i7 980X. But now I'm not that confident. Sandy Bridge looks amazing. I was sad seeing the new socket for Sandy Bridge; it does not compel me to buy a new motherboard now... Does anyone know if the 1366 socket will stick around for the next-gen high-end market? I don't want to shoot myself in the foot here.
AndreC - Saturday, August 28, 2010 - link
Sorry, didn't read the last sentence... Great review, BTW. Cheers.
sdsdv10 - Saturday, August 28, 2010 - link
As noted in the Intel roadmap in the article, for at least part of 2011 they will be sticking with 1366 for the release of the Core i7 990X (to replace the 980X). However, after that, the Intel performance platform will switch over to socket LGA-2011. Here is a quote from the article (page 3):
AndreC - Saturday, August 28, 2010 - link
Yeah.. as I said, "Sorry, didn't read the last sentence..." but thanks anyway. It's a shame to be always changing sockets, but probably a necessity to evolve the technology.
Kaihekoa - Saturday, August 28, 2010 - link
Having the first chips target the mainstream market is a very smart move by Intel, because that's where AMD makes its money. I'm honestly not impressed by the performance numbers, but I am impressed by the overall performance, power consumption, and price points for these next-gen CPUs. What I'm really looking forward to is the performance segment of Sandy Bridge.
mino - Saturday, August 28, 2010 - link
Intel's mainstream is not where AMD's is. Especially in 2011.
Ontario:
. . . CPU - above Atom, under everything else
. . . GPU - 5450/Sandy class
Llano:
. . . CPU - 2C Sandy class
. . . GPU - 5650 class (at least 3x Sandy)
Bulldozer Desktop(8C):
. . . CPU - 4C Sandy Class
. . . GPU - discrete 5750+ class
So basically AMD's platform in the Intel's "mainstream" $200+ class will be a Bulldozer with discrete GPU. Aka AMD's high end stuff.
silverblue - Saturday, August 28, 2010 - link
Not sure I agree with that. From AMD's own figures, Bulldozer is significantly faster than STARS. It would be more realistic to expect Bulldozer to perform closely to Sandy Bridge; however, we really need more benchmarks before we get a true idea. Bulldozer looks great on paper, but that's virtually all we have so far.
In any case, you compared Bulldozer to "4C Sandy Class", which would be an 8-thread Sandy Bridge, and thus - at least relatively - high end. And I'm not getting into the core/module argument again... ;)
mino - Sunday, August 29, 2010 - link
What I wanted to point out is that Intel sees the 4C Sandy as a "mainstream" part. The reason being they are moving HUGE amounts (compared to AMD) of $150-$250 parts.
On the other hand, AMD sees the mainstream at $100-$200, and that is Llano's market.
For AMD, Zambezi is the high end, which justifies a discrete GPU.
And yes, Bulldozer 8C should compare favorably with 4C Sandy (it would mostly come down to pricing).
tatertot - Sunday, August 29, 2010 - link
BD 8C is going to be up against 8C and 6C Sandy on LGA-2011 in the client space.
mino - Monday, August 30, 2010 - link
I am sure AMD WOULD like it that way. But no. Not really. 8C Bulldozer versus 6C Sandy might actually be competitive.
However, 32nm SOI is a new process, so we might as well forget about 4GHz parts for now.
Also, Sandy 6C is a Q4 part and the 8C is most probably a 2012 part.
overzealot - Saturday, August 28, 2010 - link
Now, that's a name I've not heard in a long time. A long time.
mapesdhs - Saturday, August 28, 2010 - link
Seems to me Intel is slowly locking up the overclocking scene because it has no competition. If so, and Intel continues in that direction, then it would be a great chance for AMD to win back overclocking fans with something that just isn't locked out in the same way.
Looking at the performance numbers, I see nothing which suggests a product that would beat my current 4GHz i7 860, except for the expensive top-end unlocked option, which I wouldn't consider anyway given the price.
Oh well, perhaps my next system will be a 6-core AMD.
Ian.
LuckyKnight - Saturday, August 28, 2010 - link
Do we have something more precise about the release date? Q1 is what - Jan/Feb/March/April?
Looking to upgrade a Core 2 Duo at the moment - not sure whether to wait.
mino - Saturday, August 28, 2010 - link
Q1 (in this case) means trickle amounts in Jan/Feb, mainstream availability in Mar/April, and worth-buying mature mobos in the May/June timeframe.
tatertot - Saturday, August 28, 2010 - link
Intel has already announced that shipments for revenue will occur in Q4 of this year. So, January launch.
They've also commented that Sandy Bridge OEM demand is very strong, and they are adjusting the 32nm ramp-up to increase supply. So January should be a decent launch.
Not surprising-- these parts have been in silicon since LAST summer.
chrsjav - Saturday, August 28, 2010 - link
Do modern clock generators use a quartz resonator? How would that be put on-die?
iwodo - Saturday, August 28, 2010 - link
Since you didn't get this chip directly from Intel, I suspect there were no review guidelines for you to follow, like which tests to run and which not to run, etc. Therefore those game benchmarks were not the result of special optimization in drivers. Which is great, because drivers matter much more than hardware in GPUs. If these are only early indications of what Intel's new GPU can do, I expect there is more to extract from the drivers.
You mention a 2-core GPU (12 EU) versus a 1-core GPU (6 EU); any guess as to what "E" stands for? And it seems like an SLI-like tech rather than actually having more EUs in one chip. The difference being that SLI or CrossFire doesn't get any advantage unless drivers and games work together, which greatly reduces the chances of it working at full performance.
It also seems everyone fails to realize that one of the greatest performance gains will come from AVX. AVX will be like MMX again, back when we had the Pentium. I can't think of any other SSE extension having as great an importance to performance as AVX. Once software is specially optimized for AVX we should get another major lift in performance.
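To make that concrete (a hypothetical sketch of my own, not anything from the article): AVX widens the vector registers from 128 to 256 bits, so a recompiled loop can process 8 floats per instruction where SSE handles 4. Something like:
[code]
#include <immintrin.h> /* AVX intrinsics; build with a flag like -mavx */

/* Hypothetical example: add two float arrays 8 elements at a time.
   An SSE build of this loop (__m128 / _mm_add_ps) only does 4. */
void add_arrays(const float *a, const float *b, float *out, int n)
{
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);               /* load 8 floats */
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb)); /* 8 adds at once */
    }
    for (; i < n; i++) /* scalar tail for the leftover elements */
        out[i] = a[i] + b[i];
}
[/code]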
I also heard rumors that 64-bit will work much better in Sandy Bridge, but I don't know if there is anything we could test this with.
The OpenCL situation sounds like an Intel management decision rather than a technical decision. Maybe Intel will provide, or work with Apple to provide, OpenCL on these GPUs?
You also mention that Intel somehow supports PCI Express 2.0 with 1.0 performance. I don't get that bit. Could you elaborate? 2.5GT/s as on the G45 chipset?
If Intel ever decides to finally work on their drivers, then their GPUs will be great for entry level.
Is dual-channel DDR3-1333 enough for a quad-core CPU + GPU? Or even a dual-core CPU?
Is the GPU memory bandwidth limited?
Any update on the hardware decoder? And what about the transcoding part?
Would there be a way to lock the GPU to run at its Turbo clock all the time? Or does the GPU get higher priority in Turbo, etc.?
How big is the die?
P.S. - (Any news on Intel's G3 SSD? I am getting worried that the next-gen SandForce is too good for Intel.)
ssj4Gogeta - Saturday, August 28, 2010 - link
I believe EU means execution units.
DanNeely - Sunday, August 29, 2010 - link
"You also mention that Intel somehow supports PCI Express 2.0 with 1.0 performance. I don't get that bit. Could you elaborate? 2.5GT/s as on the G45 chipset?"
PCIe 2.0 included other low-level protocol improvements in addition to the doubled clock speed. Intel only implemented the former, probably because the latter would have strangled the DMI bus.
"Is dual-channel DDR3-1333 enough for a quad-core CPU + GPU? Or even a dual-core CPU?"
Probably. The performance gains vs. the previous generation aren't that large, and it was enough for anything except pathological test cases (e.g. memory benchmarks). If it wasn't, there'd be no reason why Intel couldn't officially support DDR3-1600 in their locked chipsets to give a bit of extra bandwidth.
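For what it's worth, the theoretical peak is easy to ballpark (my own back-of-the-envelope math, not figures from the article): each 64-bit DDR3 channel moves 8 bytes per transfer, so dual-channel DDR3-1333 tops out around 21GB/s.
[code]
#include <stdio.h>

/* Rough peak-bandwidth math for the configs discussed here:
   transfers/sec * 8 bytes per 64-bit channel * channel count. */
int main(void)
{
    double mts[] = { 1333e6, 1600e6 }; /* DDR3-1333 and DDR3-1600 */
    for (int i = 0; i < 2; i++)
        printf("dual-channel DDR3-%.0f: %.1f GB/s peak\n",
               mts[i] / 1e6, mts[i] * 8 * 2 / 1e9);
    return 0;
}
[/code]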
chizow - Saturday, August 28, 2010 - link
@Anand
Could you please clarify and expand on this comment? Is this true for all Intel chipsets that claim support for PCIe 2.0?
[q]The other major (and welcome) change is the move to PCIe 2.0 lanes running at 5GT/s. Currently, Intel chipsets support PCIe 2.0 but they only run at 2.5GT/s, which limits them to a maximum of 250MB/s per direction per lane. This is a problem with high bandwidth USB 3.0 and 6Gbps SATA interfaces connected over PCIe x1 slots. With the move to 5GT/s, Intel is at feature parity with AMD’s chipsets and more importantly the bandwidth limits are a lot higher. A single PCIe x1 slot on a P67 motherboard can support up to 500MB/s of bandwidth in each direction (1GB/s bidirectional bandwidth).[/q]
If this is true, current Intel chipsets do not really support PCIe 2.0, as 2.5GT/s and 250MB/s is the same effective bandwidth as PCIe 1.1. How did you come across this information? I was looking for ways to measure PCIe bandwidth but only found obscure proprietary tools not available publicly.
If Intel chipsets are only running at PCIe 1.1 speeds regardless of what they claim externally, that would explain some of the complaints/concerns about bandwidth on older Intel chipsets.
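For anyone who wants to sanity-check those figures, the per-lane math is straightforward (my own sketch, assuming the 8b/10b encoding that both PCIe 1.x and 2.0 use on the wire):
[code]
#include <stdio.h>

/* With 8b/10b encoding only 8 of every 10 bits on the wire are
   payload, so a lane's byte rate is transfers/sec * 0.8 / 8. */
int main(void)
{
    double rates[] = { 2.5e9, 5.0e9 }; /* PCIe 1.x and 2.0 signaling */
    for (int i = 0; i < 2; i++)
        printf("%.1f GT/s lane = %.0f MB/s per direction\n",
               rates[i] / 1e9, rates[i] * 0.8 / 8 / 1e6);
    return 0;
}
[/code]
That reproduces the 250MB/s and 500MB/s numbers from the article quote above.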
chizow - Saturday, August 28, 2010 - link
OK, it seems as if you were referring to the PCIe lanes connected off the actual P67 chipset, not the native PCIe controller integrated into the CPU. I do recall the P55 chipset supporting PCIe 2.0 but limiting it to PCIe 1.0 bandwidth for interconnects like USB or SATA controllers.
chizow - Saturday, August 28, 2010 - link
Overall it looks like Sandy Bridge is a disappointment. One really has to question why Intel has reversed their Tick-Tock cadence this time around by launching their low-to-mid-range parts and platform so soon on the heels of P55/Lynnfield/Clarkfield, but I guess it makes more sense in light of the fact that Intel delayed that platform's launch for nearly a year. I would be EXTREMELY disappointed if I had bought a P55 board in the last year only to find out Intel is again requiring a platform/socket change for what appears to be a marginal upgrade.
There are also some clear deficiencies and disappointments in terms of improvements over last-gen platforms with P67:
1) No additional L2/L3 cache, in some cases less than previous gen.
2) No native USB 3.0 support. One can conjure up myriad reasons for why Intel is resisting USB 3.0 adoption, but it's obvious at this point that they have been resisting it since its inception.
3) Limited SATA 6G support, on 2 of the 6 ports I believe, but still better than nothing I suppose.
4) No additional PCIe lanes or PCIe 3.0 support, but at least they're finally going to support their actual PCIe 2.0 rated specs?
5) Limited/reduced overclockability. Big mistake imo; Intel seems to be forgetting why AMD was the enthusiast's choice back in the early Athlon/P4 days.
That leaves us with the major improvements:
1) 5-15% improvement clock-for-clock and core-for-core compared to older Nehalem and Westmere architectures.
2) Lower TDP
3) 2x faster GPU that's still too slow for any meaningful gaming
Hopefully the high-end X58 replacement platform offers bigger improvements. There's also some question as to whether LGA2011 will be HPC/server-only, with an intermediary platform (LGA1355) to replace LGA1366; however, early rumors show the new platform will address many of the deficiencies I listed with P67 and show us what Sandy Bridge is really capable of. Get rid of that extraneous, massive GPU on the high end and replace it with more L3 and execution units, and we'll see some bigger gains than the underwhelming 5-15% we see with this first version of Sandy Bridge.
seapeople - Saturday, August 28, 2010 - link
Remember, that 5-15% clock-for-clock increase includes Turbo Boost functioning on the current processors, which generally ratchets up the clock speed even in heavily multithreaded loads. It looks like the IPC increase with Sandy Bridge is at least 20% here. I would consider that fairly significant considering that Intel is already on top of the market with no real competition, other than AMD selling its top-of-the-line CPUs for cheap.
It's also weird to see people deride IGP improvements that double the performance of the previous version. These integrated graphics are sufficient for probably 85% of the market (pretty much everyone who doesn't need to play current high-end games). Basically, the majority of people will be getting a free $50 graphics card built into their processor, which itself is giving you a 20-40% performance improvement over a similarly priced last-gen processor.
chizow - Saturday, August 28, 2010 - link
Yeah, I actually factored in Turbo Boost not working on Sandy Bridge; otherwise it would probably be closer to a 0-10% increase clock-for-clock. Anand pegs SB at ~10% faster overall clock-for-clock in his conclusion, with another 3-7% from Turbo.
Also, tempering any excitement over that 10% IPC increase, we have the very bad news about Intel limiting overclocking significantly. For virtually anyone who already owns a P55/Lynnfield/Clarkfield combo, anything but a "K"-designated chip may actually be a downgrade, as you won't be able to easily enable your own homebrewed "Turbo" any longer with most Sandy Bridge SKUs. I'd say the nearly guaranteed 30-40% OC you lose far outweighs the prospective 10-15% clock-for-clock gain you'd see with Sandy Bridge.
As for the IGP being sufficient, or any great accomplishment of Sandy Bridge... I'd disagree. Sure, I guess it's great news for Intel that SB is actually able to adequately accelerate 1080p, but it's still far from replacing even mid-range parts from 2-3 generations ago. If Anand ran some benchmarks at resolutions and settings people actually use, it might be more relevant; the fact of the matter is, ~80% of "gamers" are gaming at resolutions of 1280x1024 or higher according to the Steam survey: http://store.steampowered.com/hwsurvey/
My issue with the IGP is that it's going to take up significant die space; using Clarkdale as a guideline, I estimate the 2C IGP takes at least as much die area, relative to the rest of the chip, as it did there, even next to a 4C CPU. For those who have no interest in an IGP, or who go with the P67 platform that doesn't even support it, that's a waste of die space that you're still absorbing and paying for.
I just find it amazingly ironic: the CPU was once thought of as the "general purpose" ASIC, and the GPU as the "fixed-function", inflexible one. How times have changed. With Sandy Bridge we now have the CPU, an on-die IGP, and now even talk of an integrated super-sekret hardware video transcoder! Roles have clearly reversed, as the CPU becomes ever more segmented and specialized while the GPU continues to evolve toward general-purpose flexibility.
In that sense, I really think AMD has the right approach with Fusion, as the ALUs and FPUs will be shared on their Bulldozer and Bobcat designs rather than segregated and specialized as on Sandy Bridge, with its single-purpose CPU cores and IGP EUs.
DanNeely - Sunday, August 29, 2010 - link
80% of Steam users is not the same thing as 80% of total PC buyers, or even 80% of total gamers (think Facebook games, etc.). Serious gamers are not, any more than overclockers, a core market for Intel's or AMD's CPU divisions.
chizow - Sunday, August 29, 2010 - link
Yes, I'm well aware Steam users do not make up 100% of the total PC market, but I would say it is a fair representation of the kind of hardware and resolutions actual gamers use. In those same browser-based games you're referring to, any existing IGP would have been adequate, but that's clearly not the market Intel is trying to entice or the point of the comparison: buyers who would otherwise choose discrete GPUs.
As you can see, most of these users are not using Intel IGPs (only 7%), because they are inadequate for actual gaming at the resolutions ~80% of them game at, 1280x1024 or higher. So benching a bunch of games at 1024x768 and trying to pass off this new IGP as adequate tells me nothing, as it's not indicative of real-world use.
Also, I'd take this a step further and argue the vast majority of those buying one of these new Sandy Bridge processors and systems would opt for a much higher resolution than even 1280x1024, as the most common desktop displays for purchase today are wide-aspect 1680x1050, 1920x1080, and 1920x1200 panels. When's the last time you were able to buy an OEM build with a 1024x768 native display, or even a 4:3 or 5:4 display for that matter?
If Intel and AT want to pass this IGP off as an HD gaming solution to rival discrete solutions, bench some resolutions and settings people would actually expect to game at.
tatertot - Sunday, August 29, 2010 - link
No, the 10% average outperformance in this review (see the conclusion) is against the i7 880, which has been allowed to turbo.
Anand uses "clock-for-clock" to distinguish that part from the "same price replacement," the i5 760.
So it achieves 10% average outperformance against a part that runs ~20% faster on single-threaded loads, ~15% faster on 2 threads... down to a bin or so of turboing on fully-threaded loads.
That puts the clock/clock performance improvement at around 20%, and this is not including AVX / hardware transcoding.
chizow - Sunday, August 29, 2010 - link
Yes, the i7 880 is the basis for the clock-for-clock comparisons that come to a 10% increase; with Turbo on SB he expects another 3-7%, which is again in line with my estimate of 5-15%, instead of 0-10%, clock-for-clock with and without Turbo.
From the conclusion verbatim:
"Sandy Bridge seems to offer a 10% increase in performance. Keep in mind that this analysis was done without a functional turbo mode, so the shipping Sandy Bridge CPUs should be even quicker. I'd estimate you can add another 3 - 7% to these numbers for the final chips."
In almost all of the benches in the test you are going to be limited to 1 or 2 Turbo bins max, which is why Anand limited his estimates to 3-7%: all of the tests use more than 1 core. Under the same tests the benefits of Turbo for both Lynnfield and SB are going to be the same, assuming the final Turbo bins and throttling are also the same. So if Lynnfield only gets 1 bin at 2+ cores then SB would only get the same benefit, which is where I'm sure Anand got his estimates (100/3100 and 200/3100).
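Spelling that bin math out (my own arithmetic, assuming 100MHz bins on a 3.1GHz base clock):
[code]
#include <stdio.h>

/* One turbo "bin" is a 100MHz step; on a 3.1GHz base, 1-2 bins
   lands right in the 3-7% window Anand quotes. */
int main(void)
{
    const double base_mhz = 3100.0, bin_mhz = 100.0;
    for (int bins = 1; bins <= 2; bins++)
        printf("%d bin(s): +%.1f%%\n",
               bins, bins * bin_mhz / base_mhz * 100.0);
    return 0;
}
[/code]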
Simply put, a 15% or even 20% clock-for-clock increase after 2 years from a new architecture is underwhelming imo, especially considering everything else they've left out. But I guess this is what we've come to expect, and be thrilled about, in a market dominated by Intel without any real competition. Sorry, I'm just less than impressed, especially given the artificial restrictions Intel plans to place on overclocking, further reducing any IPC benefits of SB compared to Lynnfield.
seapeople - Sunday, August 29, 2010 - link
If you throw out NetBurst, which was a significant decrease in IPC from the Pentium III, when have we had a significantly greater than 20% IPC increase within 2 years for an architecture? I understand your other complaints (although I don't see what's wrong with just buying the K models, which all indications suggest won't be much more expensive), but what were you really expecting in IPC increases? 40%? 60%?
chizow - Sunday, August 29, 2010 - link
NetBurst was a reduction in IPC but a tripling of clockspeed compared to the P3, but surely you aren't forgetting the incredible gains in IPC from NetBurst to Yonah (Core) and Conroe (Core 2)?
Conroe effectively increased performance 100% clock-for-clock over the P4 (or 50% or so over Yonah), as it offered some 50% better performance at 50% lower clockspeeds than NetBurst. While I certainly don't expect that kind of revolutionary product every 2-3 years, we're not even close to that kind of gain in the 4-5 years since Conroe was introduced, with not even that much aggregate difference from Conroe/Penryn/Nehalem/Westmere to SB. From Conroe to SB, clock for clock, we're maybe looking at a 50% improvement?
That's 2 full Tick-Tock cycles, signaling that Moore's Law is clearly dead to Intel when it comes to performance; they only loosely follow its cadence in terms of refreshes, die sizes, transistor counts and fab processes. In order to achieve those kinds of gains back then, they had to redesign their CPU nearly from the ground up to compete with AMD, which had the performance lead at the time. Intel clearly hasn't felt the need to improve or innovate significantly since, as AMD is still essentially 2 generations behind in performance, about on par with Intel's Penryn offerings at this point.
seapeople - Sunday, August 29, 2010 - link
So you're saying that integrated graphics should either be able to handle high-resolution gaming, using at least medium settings on the upper echelon of current games, or not be included at all? That's fairly narrow-minded. The bottom line is that most people will never need a better graphics card than SB provides, and the people who do are probably going to buy a $200+ graphics card anyway and replace it every summer, so are they really going to care if the integrated graphics drive the price of their $200 processor up by $10-20? Alternatively, this chip is begging for some sort of Optimus-like option, which would let hardcore gamers buy the graphics card they want AND not have to chew up 100W of graphics power while browsing the web or watching a movie.
Regardless, for people who aren't hardcore gamers, the IGP on SB replaces the need to buy something like a Radeon HD 5450, ultimately saving them money. This seems like a positive step to me.
chizow - Sunday, August 29, 2010 - link
No, I'm saying that if this is being advertised as a suitable discrete GPU replacement, it should be compared to discrete GPUs at resolutions and settings you would expect a discrete GPU to handle, not to IGPs that we already know are too slow to matter. 1024x768 at all-lowest settings doesn't fit that criteria. Flash and web-based games don't either, since they don't even require a 3D accelerator to run (Intel's workaround Broadcom chip would be fine).
Again, this card wouldn't even hold a candle to a mid-range $200 GPU from 3 years ago; the 8800GT would still do cartwheels all over it. You can buy such cards for much less than $100; even the GT240 or 4850, for example, have been selling for less than $50 after MIR and would be much more capable gaming cards.
Also, you're badly mistaken if you think this GPU is free by any means, as the cost of integrating a GPU onto SB's die comes at the expense of what could've been more actual CPU. So instead of better CPU performance this generation, you trade it for mediocre graphics performance. There is a price to pay for that relatively massive IGP; whether you think so or not, you are paying for it.
wut - Sunday, August 29, 2010 - link
You don't know what you're talking about. You pretend that you do, but you don't.
The telling sign is your comment about L2/L3 cache.
chizow - Sunday, August 29, 2010 - link
Actually, it sounds like you don't know what you're talking about, or you didn't read the article:
"Only the Core i7 2600 has an 8MB L3 cache, the 2400, 2500 and 2600 have a 6MB L3 and the 2100 has a 3MB L3. The L3 size should matter more with Sandy Bridge due to the fact that it’s shared by the GPU in those cases where the integrated graphics is active. I am a bit puzzled why Intel strayed from the steadfast 2MB L3 per core Nehalem’s lead architect wanted to commit to. I guess I’ll find out more from him at IDF :)"
You might've also missed, very clearly stated in the tables, that only the 2600 has the same 8MB L3, i.e. 2MB per core, as previous 4C parts like Bloomfield/Lynnfield/Westmere/Clarkdale. The rest have 6MB or 3MB, which is less than the 8MB or 4MB L3 used on the previous-generation chips.
This may change with the high-end/enthusiast platform, but again, the amount of L3 cache is actually going to be a downgrade on many of these Sandy Bridge SKUs for anyone who already owns a Nehalem/Westmere-based CPU.
wut - Friday, September 10, 2010 - link
You're parroting Anand and his purely number-based guess. Stop pretending.
mac2j - Saturday, August 28, 2010 - link
The 990X is a Gulftown part on 1366 that's 130MHz faster than the 980X... it will cost $1000 and come out at the same time as the 2600 (which will cost ~1/2 as much and deliver 90% of the performance), and at most a couple of months before the i7-2800K, which will cost less and trounce it performance-wise.
You'd have to REALLY want those extra cores to buy a 990X on a lame-duck socket at that point!
wut - Sunday, August 29, 2010 - link
Someone has to get those chips to populate the uppermost echelons of the 3DMark scoreboards. It's an expensive hobby.
hybrid2d4x4 - Saturday, August 28, 2010 - link
Anand, can you provide some more info on what the system configuration was when running the power tests? The test setup lists 2 video cards and it's not clear which was used when deriving the power graphs. Also, what PSU was used?
Just wondering, since if it was a 1200W behemoth, then the 63W idle might really be 30W on a more reasonable PSU (assuming no video cards)...
As always, thanks for the article!
smilingcrow - Saturday, August 28, 2010 - link
Was HT enabled for the power tests, and what application was used to load the cores?
semo - Saturday, August 28, 2010 - link
No USB 3.0 support and a half-baked SATA3 implementation. I could be a bit too harsh about the latter (can't say whether SATA3 on a 6-series chipset will perform poorly or not), but why are they going with only two 6Gb/s ports? I understand that most people are likely to buy only 1 or so SSDs in the near future, but what about in a few years when these things become mainstream? At least AMD took SATA3 seriously, even if they couldn't quite make it work initially (we need a follow-up on the 8-series chipsets' SATA performance!)
Not only is Intel overlooking advances in technologies other than CPUs (which are important to most consumers, whether they are aware of it or not), but it is also denying other companies who might have more focus in those areas. I wonder if NVIDIA or anyone else will bother to release a chipset for Intel's latest and greatest.
wut - Sunday, August 29, 2010 - link
Yep. I bet AMD is really wondering about that right now.
greenguy - Saturday, August 28, 2010 - link
I bought basically what is an i5-750, based on Anand's review here. Or at least the Xeon version with Hyper-Threading (I needed ECC RAM).
From what I can tell, you get about a 20-30% improvement over the i5-750, with the same power consumption. That's pretty good. Not only that, you get some competent entry-level graphics... which should have good open-source drivers. That's somewhat exciting, though I wonder whether it would do multiple monitors. Any idea on that, Anand?
Maybe I'll just stick to the cheapest NVIDIA discrete cards I can buy; a couple of G210s do the trick (to get 4 1920x1200 monitors). Unless Intel can make those G210s redundant, this represents just an incremental bump, as the only thing of interest is the increase in CPU performance. One thing that is nice is that Intel is reputed to have the best open-source support for their GPU drivers, which makes things really interesting now that they are producing stuff that will compete with the entry-level discrete market. It could be really good for Linux/BSD people like myself.
The other thing of interest for me is the low-power, low-cost, many-SATA-connections space, with ECC. I wonder if Bobcat will have something there, as AMD doesn't seek to arbitrarily differentiate its markets like Intel does with ECC RAM.
Also, I'm not really sure what the big deal is with keeping the same motherboard across CPUs. I tend to keep the same computer as a build: by the time you want to upgrade the CPU, there is invariably other stuff that needs upgrading, e.g. USB 3.0, graphics, SATA, RAM, whatever, so you end up wasting the old parts for not that much benefit. Better to just re-purpose the old machine and, when you have enough money, buy the most performant parts that are still good bang for the buck. A good example was the i5-750 about 8 months ago or so. So I don't fault Intel for this.
wut - Sunday, August 29, 2010 - link
Intel's going after the mid-range market, where most of the money is. We'll have to wait and see how good AMD's Fusion mid-range ends up being. Even if it catches up all the way and achieves performance parity, so AMD can make more money by raising prices, Intel would have its newest gen on the market first. Fusion had better be really, really good...
Hrel - Sunday, August 29, 2010 - link
I'd say if that 2500K is $215 or less it'd be a fair buy. I'd still wait for the price to drop below $200, 'cause that's my absolute cap on a CPU. I am a little annoyed that it doesn't have Hyper-Threading though, from a moral standpoint; I mean, from a raw-materials standpoint, how much does adding Hyper-Threading cost? Nothing! Yeah, that's what I thought.
Hrel - Sunday, August 29, 2010 - link
Those are some impressive integrated graphics. I've thought this for a while now, but we really don't need a card any lower than the HD 5670, and maybe the 5650, in discrete graphics. Preferably just the 5670, though. If mobo makers start setting aside a single DDR3 slot for the integrated GPU to use as dedicated GPU-only memory, like a discrete GPU has, so the integrated GPU doesn't have to share system RAM, we really won't need low-end graphics in laptops at all anymore.
mino - Sunday, August 29, 2010 - link
Adequate. The 5450 is the LOWEST-END card, a facelifted 4350 from 2008.
And the 780G, which it FINALLY manages to outpace in 2011, is a mainstream part from 2008 too.
In 2011 there will be a 10W Ontario with a 5450-class GPU on 40nm bulk...
On the other hand, it seems Intel is taking the GPU side seriously. Finally.
But they are still where ATI/NV were in 2004...
LuckyKnight - Sunday, August 29, 2010 - link
I would have liked to see a better comparison when it comes to idle power consumption. How much has it improved with the move from a 45nm to a 32nm GPU?
Also, has Intel addressed the Clarkdale issue of not outputting the industry-standard 24fps (23.976Hz)?
Miggleness - Sunday, August 29, 2010 - link
I was planning on purchasing an i5-760 in 2 weeks, but it looks like I'll have to settle for 2nd-hand, low-end parts instead and wait for the i5-2400's release.
Great job, Intel. I for one no longer have that much interest in overclocking when I have Turbo Boost to compensate for it.
Hope we hear about the official pricing soon.
jfelano - Sunday, August 29, 2010 - link
Intel could have hit one out of the park with this one if it worked on existing s1156 motherboards... unfortunately it doesn't, and it screws over everyone who bought into s1156 or s1366... yet again.
siberian 3 - Sunday, August 29, 2010 - link
Hi everyone, I've read the preview and I am not so impressed by the performance of SB. The IGP is great, but it makes sense only for the mobile segment of the PC market, not the desktop!!
From the preview I understand that this is not a real Fusion product but an evolution of the Clarkdale and Arrandale products.
So I will wait for Llano and see what AMD has to offer!!!
iwodo - Sunday, August 29, 2010 - link
The GPU is on the same die, so it depends on what you mean by a true "Fusion" product. By AMD's definition (AMD being the creator of the term "Fusion"), it is a Fusion product.
iwodo - Sunday, August 29, 2010 - link
You get 10% more IPC on average; it varies widely, from 5% to ~30%, clock for clock.
None of these tests had AVX code. I am not sure if you need to recompile to take advantage of the additional width for faster SSE code (I am thinking such a change in instruction encoding should require one). AVX should offer some more improvement in many areas.
So much performance is here with even less peak power usage. If you factor in Turbo mode, Sandy Bridge actually gives you a huge boost in performance per watt!!!
So I don't understand why people are complaining.
yuhong - Sunday, August 29, 2010 - link
Yes, AVX requires software changes, as well as OS support for using XSAVE to save the AVX state.
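A minimal sketch of what that check looks like (my own code, not from the article): before touching AVX, software has to confirm both the CPU's AVX/OSXSAVE bits and that the OS has enabled YMM state saving in XCR0.
[code]
#include <cpuid.h> /* GCC/Clang helper for the CPUID instruction */
#include <stdio.h>

static int avx_usable(void)
{
    unsigned int eax, ebx, ecx, edx, lo, hi;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;
    if (!(ecx & (1u << 28)) || !(ecx & (1u << 27))) /* AVX, OSXSAVE */
        return 0;
    __asm__("xgetbv" : "=a"(lo), "=d"(hi) : "c"(0)); /* read XCR0 */
    return (lo & 0x6) == 0x6; /* OS saves both XMM and YMM state */
}

int main(void)
{
    printf("AVX %s\n", avx_usable() ? "usable" : "not usable");
    return 0;
}
[/code]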
BD2003 - Sunday, August 29, 2010 - link
It sounds like Intel has a home run here, at least for my needs. Right now I'm running entirely on Core 2 chips, but I can definitely find a use for all of these.
For my main/gaming desktop, the quad-core i5s seem like they'll be the first upgrade that's big enough to move me away from my E6300 from 4 years ago.
For my HTPC, the integrated graphics seem like they're getting to the point where I can move past my E2180 + 9400 IGP. I need at least some 3D graphics, and the current i3/i5 don't cut it. Even lower power consumption + a faster CPU, all in a presumably smaller package: win.
For my home server, I'd love to put the lowest-end i3 in there for great idle power consumption, but with the speed to make things happen when it needs to. I'd been contemplating throwing in a quad core, but if the on-die video transcoding engine is legitimate there will be no need for that.
That's still my main unanswered question: what's the deal with the video encoder/transcoder? Does it require explicit software support, or is it compatible with anything that's already out there? I'm mainly interested in real-time streaming over LAN/internet to devices such as an iPad or even a laptop. If it can put out good-quality 720-1080p H.264 at decent bitrates in real time, especially on a low-end chip, I'll be absolutely blown away. Any more info on this?
_Q_ - Sunday, August 29, 2010 - link
I do understand some of the complaints, but Intel is running a business, and so they do what is in their best interest.
Yet concerning USB 3.0, it seems too much of a disservice to customers that it isn't in there natively, without any third-party add-on chip!
I think it is shameful of them to delay this further just so that they can get their Light Peak thing into the market. I read nothing about that in this review either, so I wonder: when will even that one come?!
I can only hope AMD does support it (haven't read about it) and they start gaining more market share; maybe that will show these near-sighted Intel guys.
tatertot - Sunday, August 29, 2010 - link
Light Peak would be chipset functionality, at least at first.
Also, Light Peak is not a protocol; it's protocol-agnostic, and can in fact carry USB 3.0.
But, rant away if you want...
Guimar - Sunday, August 29, 2010 - link
Really need one.
Triple Omega - Sunday, August 29, 2010 - link
I'm really interested to see how Intel is going to price the higher end of these new CPUs, as there are several hurdles:
1) The non-K's are going up against highly overclockable 1366 and 1156 parts, so pricing the K models too high could mean trouble.
2) The LGA-1356 platform housing the new consumer high end (LGA-2011 will be server-only) will also arrive later in 2011. Since those are expected to have up to 8 cores, pricing the higher 1155 CPUs too high will force a massive price drop when 1356 arrives (or the P67 platform will collapse). And 1366 has shown that such a high-end platform needs the equivalent of an i7 920 to be successful. So pricing the 2600K @ $500 seems impossible. Even $300 would not leave room for a $300 1356 part, as that would, with 6-8 cores, easily outperform the 2600K.
It will also be quite interesting to see how those overclocking limits develop when 1356 comes out, as imposing limits there too could make the entire platform fail (an OCed 2600K beating a 6-core 1356 CPU, for example). And of course AMD's response to all this: will they profit from Intel's overclocking limits? Will they grab back some of the high end? Will they force Intel to change their pricing on 1155/1356?
@Anand:
It would be nice to see another PCIe 2.0 x8 SLI/CF bottleneck test with the new HD 6xxx series when the time comes. I'm interested to see if the GPUs will catch up with Intel's limited platform choice.
thewhat - Sunday, August 29, 2010 - link
I'm disappointed that you didn't test it against 1366 quads. The triple-channel memory and a more powerful platform in general have a significant advantage over 1156, so a lot of us are looking at those CPUs, especially since the i7 950 is about to have its price reduced.
A $1000 six-core 980X doesn't really fit in there, since it's at a totally different price point.
I was all for the 1366 as my next upgrade, but the low power consumption of Sandy Bridge looks very promising in terms of silent computing (less heat).
SteelCity1981 - Sunday, August 29, 2010 - link
What do you think the Core i7 980X uses? An LGA-1366 socket with triple-channel memory support. So what makes you think the Core i7 950 is going to perform any differently?
thewhat - Monday, August 30, 2010 - link
Because the 980X is a 6-core and the 950 is a 4-core!
It doesn't make sense to compare a 6-core to a 4-core when there's an $800 price difference.
A 1366 4-core (preferably at the same CPU speed) would make much more sense for showing the differences between the various architectures/sockets.
SteelCity1981 - Monday, August 30, 2010 - link
OK, and the difference in performance would be what now? If they're showing you how well the new 2nd-gen CPUs compare to even a $1000 CPU, what makes you think the Core i7 950, which is slower than a 980X, would fare any better? It's common logic that if the 2nd-gen parts can run almost on par with a 980X in many benchmark tests, then obviously they're going to run better than a Core i7 950.
kake - Sunday, August 29, 2010 - link
Damn you Intel! Damn you to hell!!
I have been living with an AMD Athlon XP 1800+ since 2003-ish. This was mostly because I liked the SoundStorm, which did a very good Dolby Digital Live output. For the last eight months I've had to run it at about 2/3rds speed because all the caps on the motherboard burst, and it ran at 80C all the time. The GPU fan died and I wired an 80mm fan on top of it, but it had overheated once too often to do any 3D work. The DVD burner wouldn't read or write, the DVD reader wouldn't open except under duress. The SATA bus started to scramble any data read or written through it, the second LAN port (the good one) died, and the USB would usually demand a game of musical chairs with the mouse and keyboard to get them to work.
So last week I bought all the bits and built a very reasonably priced ($370 with shipping and tax) i3-530-based HTPC. I've never seen anything so gorgeous as the first time I played Avatar in 1080p on the plasma.
And now you tell me all this?
Damn you Intel, I'm sick of progress.
juampavalverde - Sunday, August 29, 2010 - link
Is there a technological reason to bury 2 sockets that are still alive? They still scream performance! I don't get what Intel wants with this behavior. Hate from the IT sector? I love the performance, but it is designed in such a closed and tricky way, completely dropping two nice, established platforms, that this thing wants to be hated. I hope AMD destroys these crazy ideas of Intel's with Llano OEM sales, even if it is inferior in CPU performance.
Googer - Monday, August 30, 2010 - link
What exactly does a Lifestyle processor do?
mino - Monday, August 30, 2010 - link
Probably cuts your hair while playing Crysis :)
Googer - Monday, August 30, 2010 - link
More like it shops for a convertible for you while you worry about your hair loss in your mid-life Crysis.
Googer - Monday, August 30, 2010 - link
I understand what the differences between unlocked, regular, and power-saving CPUs are. But what exactly does Intel mean by a Lifestyle processor? How is it different from the others? What exactly is a "Lifestyle CPU"?
zepi - Monday, August 30, 2010 - link
What does the mystical "load power" mean? Does it mean running Prime95, FurMark, both, or even something "real world" like StarCraft 2?
Mithan - Tuesday, August 31, 2010 - link
I will be buying one of these the day it comes out.
The only question is whether I get a Core i5 or a Core i7. It will depend on price, I guess, as the max I am willing to spend on an i7 CPU is $250.
Anyways, it should be a nice upgrade from my E8400.
starfalcon - Tuesday, August 31, 2010 - link
Considering how great a quad core the Core i5-750 is at $195, hopefully they'll have some great quad cores at about $200.
Sabresiberian - Tuesday, August 31, 2010 - link
I've often wondered why people don't use WoW to test video performance in the computers they're testing, and the obvious occurred to me: it depends so much on where you are and what the population is in your area that the frame rates vary widely. I imagine the frame rates reported here were for an area like Durotar with no one else in sight, heh. It would be a good place in terms of consistency, anyway, though less taxing than somewhere in the Storm Peaks.
WoW is often described as a CPU-intensive game, and so a great game to include in CPU tests like you are doing here. Thanks for including it! I hope it is used for more video card tests as well; WoW may not be the most taxing test bed at the lower end of video, but at the upper end, in some areas, it can hit 4GHz i7-based CrossFired systems hard. I like playing at 85Hz everywhere I go in the WoW universe, and Cataclysm will bring new video challenges, I'm sure.
drunkenrobot - Tuesday, August 31, 2010 - link
I'm a bit disappointed at Intel's attempt to lock us out of overclocking altogether. But maybe this is AMD's chance to win back the enthusiast market. If AMD sold only unlocked parts, they would have a market segment all to themselves...
theangryintern - Wednesday, September 1, 2010 - link
OK, didn't see it in the article and don't really feel like wading through 200 comments. What I want to know is: will we be able to either A) disable the onboard graphics if we have the latest and greatest bad-ass video card, or, even better, B) run both at the same time in a configuration where, when I'm doing just generic web surfing, emailing, etc., the Intel GPU does the work and the discrete card powers down (quieter and less heat), and then when I fire up a game, the discrete card powers up and the onboard powers down?
JonnyDough - Thursday, September 2, 2010 - link
Intel is screwing over minorities! Colorblind people unite!
"Both H67 and P67 support 6Gbps SATA, however only on two ports. The remaining 4 SATA ports are 3Gbps. Motherboard manufacturers will color the 6Gbps ports differently to differentiate."
JonnyDough - Thursday, September 2, 2010 - link
Higher-performance integrated GPUs should help bring some of the gaming market back to the PC. That is a very good thing. :)
starx5 - Tuesday, September 7, 2010 - link
I'm sorry anand but is this because your intel frendly? Come on.. you have to run high resolution (2560x1600 or higher eyefinity) gaming benchmark too.
Sandbridge is nothing if it doesnt have much supiror performance in high resolution gamming.
But I know intel sucks. Even 980X is sometimes sucks in high resolution gaming.
When I see your bench, I can clearly SEE your intel frendly. Espesilly in gaming part.
Anand, of course your site is very popular(even in my country korea).
But in reality..your nothing but a intel suckass indian.
wut - Friday, September 10, 2010 - link
wut - Friday, September 10, 2010 - link
Stop. You're making yourself look like a bigoted fool.
mekdonel - Friday, November 5, 2010 - link
Naaah, you're not a Korean. Only Americans make dumb spelling mistakes like "your" in place of "you're".
starx5 - Tuesday, September 7, 2010 - link
And why didn't you ran 2560x1600 (or higher resolution like eyefinity) benchmark either? Is this because sandybrige is not that good?
wut - Friday, September 10, 2010 - link
So you're expecting Eyefinity out of a single integrated graphics connection on the back of a motherboard? Are you okay?
gundersausage - Tuesday, October 26, 2010 - link
i7-950 vs. i7-2500K... So which will be faster and a better gaming chip? Anyone?
WillyMcNilly - Thursday, October 28, 2010 - link
Fantastic preview! I am definitely getting Sandy Bridge now. Apparently the Gigabyte P67-UD7 will have an NVIDIA NF200 chip and support full x16/x16 SLI AND CrossFire! It will make a significant upgrade from my Phenom II, and I cannot see myself waiting for Bulldozer, which has apparently been delayed (gee, what a surprise!) until Q4 2011.
Chrisch - Wednesday, November 24, 2010 - link
Which sample did you use for your tests?
QDF Q12W = GT1 (850-1100MHz)
QDF Q12X = GT2 (850-1100MHz)
techeadtrevor - Thursday, December 30, 2010 - link
Hey guys, check out this review of the i7-2600K... I think it's bogus... tell me what you think of it on here. ( http://en.inpai.com.cn/doc/enshowcont.asp?id=7944 )
psiboy - Sunday, January 2, 2011 - link
Catalyst 8.12... WTF! 2-year-old drivers? How much did Intel bribe you to use drivers that old for their competition? That is a really bad path to go down... Tom's did weird stuff like that a while back and lost readers because of it... You just lost my respect, Anand...
I don't think an entire product line of CPUs with on-board graphics is anything really to get excited about, especially for us geeks. I guess I'm just old-school. Sandy Bridge, like Clarkdale, does have benefits from a single-chip chipset, though, which is very appealing from a throughput and control standpoint.
Take another look at Sandy Bridge:
visit http://www.techreign.com/2010/12/intels-sandy-brid...
hapeid - Friday, March 10, 2017 - link
Wow, Intel owned when it came to converting video, beating out much faster dedicated solutions, which was strange but still awesome.
I don't know how AMD is going to fare, but I hope their new architecture will at least compete with these CPUs, because for a few years now AMD has been at least a generation's worth of speed behind Intel.
Also, Intel's IGPs are finally gaining some ground in the games department.