$137 was the launch price of the A10-7870K; $173 was the launch price of the A10-7850K.
We've explained in previous reviews why we use launch pricing in our graphs, but it comes down to our graphs not being dynamically linked to a retailer: we have to pick a price point that's suitable over time. Launch pricing does that, even though there may be discounts later.
Business opportunity: add a drop-down to switch from launch price to "live price" for Newegg/Amazon/etc., hyperlink the live prices, and get a cut from every click.
I agree with nand. Other sites do this, and it's helpful to me because I often read reviews when I'm about to make a purchase anyway, and it helps me see what the price is right now.
Take a look at how much the GTX 980 stomps the AMD 290X above - in the review here, where they aren't paying attention and picking the best games for AMD's gimpy hardware.
Just look at the FPS difference... let it sink in - the reviewers haphazardly reveal the truth when they aren't intending to.
Oh look, a current-gen video card beat a last-gen card that costs just over half the price.
You may want to wait until the 300 series is released and benchmarked before spouting about how superior one is to the other. If the 300 series ends up being a dud (which it kinda looks like it will), so be it; at least it will be an 'apples to apples' comparison.
You don't need dynamically updated prices; simply pick the prices around the release date of the new hardware. If the price of a one- or two-year-old comparison chip was lowered significantly (yes, this still happens sometimes), comparing it based on launch price is misleading and will always make some people shout "unfair". You'd easily fix this.
I agree, the obvious thing to do is use the price as it is at the time the article is written. That means launch price for the item being reviewed and current prices for all the items it's being compared to.
Since you have to do "snapshot" pricing for these reviews, you may want to consider looking at average pricing at the time of review. At least then the time frame for each snapshot is the same.
Simply use something representative of the current situation, not old numbers which may be totally meaningless today. In Europe we have simple price comparison engines like geizhals.eu where it's really easy to find typical prices. Just exclude obvious outliers and cheap products that aren't yet deliverable, and voilà, you've got the current street price.
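The "exclude obvious outliers, take a typical price" idea above is simple enough to sketch in a few lines. This is a minimal illustration, not any site's actual method; the listing prices below are made up:

```python
from statistics import median

def street_price(prices, tolerance=0.2):
    """Estimate a typical street price from scraped retailer listings.

    Drops obvious outliers (listings more than `tolerance` away from the
    median, e.g. scalper prices or not-yet-deliverable placeholders)
    and returns the median of what remains.
    """
    m = median(prices)
    kept = [p for p in prices if abs(p - m) / m <= tolerance]
    return median(kept)

# Hypothetical listings for one CPU: one scalper, one dubious low-baller.
listings = [244.99, 249.00, 251.50, 254.90, 399.00, 189.00]
print(round(street_price(listings), 2))  # → 250.25
```

A median is used twice deliberately: the first pass anchors the outlier filter, and the second pass is robust even if a stray listing slips through the filter.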
There are tons of price comparison websites, but why should AT take the time and be responsible for tracking down pricing when we can all do that in our own areas or from our favorite retailers? They don't know where I like to shop or what currency I use, so it seems like a waste of their time. I'd much rather get quality performance analyses on a more regular basis than have them worry about updating price indexes. Who knows? Maybe their new parent company can do that for them off to the side or something?
I never see people from the US referring to them. They seem to just use Newegg or Amazon. And getting prices for European shops won't do AT any good, as they're mainly writing for the US.
And AT has to look up the launch prices of the hardware anyway when they write a review. How much more time would it take to look up the current price in a comparison engine instead? Again, I'm not talking about updating prices dynamically.
If they're not doing this, you always get comments like "who in their right mind would buy x? Vendor y must be totally stupid to keep offering this", simply because the launch price is far higher than the current price.
Why should AT take the time, you asked? Well, why should they list a price at all then? Hell, why provide any information to the reader beyond specs and performance metrics? Why even talk about the market at all? There is obviously value to be had in doing so and it comes down to where you draw the line. Whereas you would rather they not 'waste their time', I would rather not waste mine by having to question whether a product is being valued near current market price or some esoteric launch price, and then being forced to google multiple things myself to get a feel for 'real world' prices. No, the price they come to will likely not tell me the exact total of the item in question at whatever preferred retailer a reader happens to use, but it will probably be within 5-10% of that value, which is highly useful for at-a-glance comparisons and understanding where something sits in a market segment.
We're not talking about a large amount of effort here, but I'd rather AT do it than myself; it's less effort for them and it benefits a large portion of their readers. Simply provide a disclaimer about the methods used to gather the data as well as a liability warning, and everyone's happy.
There are official price cuts, and so there is new OFFICIAL pricing that it would be fair to put in the charts. If the launch price of the 7850K was $500 and the last official price was $100, wouldn't the above charts be completely misleading? People have a price/performance ratio in mind when looking at these charts. In fact, for 90% of people that's the biggest parameter, and that's exactly where those charts LIE.
Intel is sitting on its laurels because there is no competition. AMD is no competition for power users.
Users want Broadwell... mhm... aha...
Am I the only one who wants Intel to make a fast 6- (or better, 8-) core system without integrated graphics that doesn't run too hot and doesn't cost a fortune?
Can't we have such a CPU for 500-600 euros?
OK then, I'll stick with my OC'd Ivy Bridge for a few more years, and you, Intel, can complain about declining sales.
I would love to see a quad core clocked higher (or with better IPC). I don't use highly parallel software that much and I have no use for an integrated GPU. Sadly, with no competition, Intel has just been pumping the GPU for 4 generations in a row. Not that I blame them.
They could still raise the clock speed. Intel CPUs have been able to hit 4GHz on air for years. Even the top-level Haswell can do it, and it's a 4-core+HT part. If I could replace my i5-2500K with a 4GHz quad-core, that would be good enough for me. Removing the integrated GPU from the equation would yield even more thermal headroom for the CPU. But it's not happening. With CPU performance securely in their hands, Intel is trying to secure positions in the GPU and mobile markets (they'd be crazy not to at least try to diversify).
Yet raising the clock speed conflicts with higher IPC, because raising clock speed needs a longer pipeline, and a longer pipeline means taking a more serious hit on branch mispredictions. AMD managed to seriously raise IPC with the Athlon XP; Intel did it with their Core architecture. And nothing much has happened since, because there's no more pressure on Intel and AMD doesn't have the cash to invest anymore.
"IPC has been steadily improving. Core 2, and anything from AMD, are far behind at this point."
Not really. If you look here: http://www.anandtech.com/show/7003/the-haswell-rev... you'll see there's usually less than 10% gained (watch the i7 3.5GHz parts), and that's spread over three generations. Broadwell brings another 2-3%. And yes, AMD has been playing the same game, only they're stuck in their Athlon 64 days.
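For context, a ~10% total gain spread over three generations compounds to roughly 3% per generation (the cube root, not 10 divided by 3), which is easy to check:

```python
# If three generations together gained ~10% IPC, the average
# per-generation gain is the compounded (geometric) rate.
total_gain = 1.10
generations = 3
per_gen = total_gain ** (1 / generations) - 1
print(f"{per_gen:.1%} per generation")  # → 3.2% per generation
```

The 2-3% figure quoted for Broadwell is in the same ballpark as that compounded rate.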
Dude, Intel is offering exactly this with i7 4790K. A really highly clocked quad core. Use your external GPU and the iGPU won't consume any power nor limit clock speed. In fact it makes the die larger and thus helps cooling a bit.
Oh please. What a garbage excuse. "It's hard"? Intel's more than doubled IPC/Watt in their mobile chips over the last 3-4 years. They just don't care about high end IPC because they have no competition.
That's where the Xeon offers better price and value than the i7-58xx line, if you don't need unlocked performance. You can buy many 6-core Xeons cheaper than the i7-58xx line.
Yes, I've been shouting that for a month now, but of course I was treated as a brainless AMD fanboy for pointing it out, so I must be wrong. Now we have a chart that is misleading and hilarious at the same time. If Ian had done what is right and logical, those charts would have been informative, correct and fair. I think AMD created the 7870K just to troll Ian's charts.
It would be really nice if you would note the frequency each processor runs at during each test. With all the turbo modes these days it's hard to know, and therefore hard to make IPC comparisons.
Yes, please. Once upon a time, I would have had the clocks for all the models memorized, but without that comparing IPC is difficult to impossible without the clocks noted.
You state in the opening that if you upgrade on a 3-year cycle, you would be coming from Sandy Bridge. Would it not make sense to have some older Intel processors in the graphs?
With a recent new testing suite, we haven't gone back through enough generations yet with the new benchmarking scheme. You can still check legacy benchmarks (Cinebench 10 / POV-Ray) between old and new in Bench. www.anandtech.com/bench
Yes. One thing Anand did amazingly well was review the new Intel processors. Clock-normalized IPC comparisons against previous generations are really what we want to see, as well as normalized power consumption, i.e. energy used completing the same workload.
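The normalisations the commenters are asking for are simple arithmetic once clocks and run times are published. A sketch with made-up scores, clocks and wattages (illustration only, not real review data):

```python
def ipc_index(score, clock_ghz):
    # Performance per GHz: lets you compare architectures
    # independently of how high each chip is clocked.
    return score / clock_ghz

def task_energy_wh(avg_watts, seconds):
    # Energy used to finish the same workload, in watt-hours.
    # A faster chip can draw more power yet use less total energy.
    return avg_watts * seconds / 3600.0

# Hypothetical benchmark scores and clocks for two generations.
old = ipc_index(score=100, clock_ghz=3.4)
new = ipc_index(score=125, clock_ghz=3.3)
print(f"clock-normalized gain: {new / old - 1:.1%}")

# Hypothetical: 84 W for 200 s vs 65 W for 150 s on the same render.
print(task_energy_wh(84, 200), "Wh vs", task_energy_wh(65, 150), "Wh")
```

The second metric is the one that matters for efficiency claims: watts alone penalise a chip that finishes the job sooner.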
As a normal home PC user/gamer, I couldn't care less about small differences in power consumption. Sure, 100 Watts might matter, but what matters most to me is outright performance.
As a normal PC user/gamer, that's all I care about. Less heat and thus less noise. Having a passively cooled PC that never goes above 65C under load is pure bliss.
Less heat and less noise is nice. More FPS is better. When someone can build a silent system that can keep up with an i7 processor and GTX 970 video card, let me know. In the meantime, I won't compromise. Some fan noise is a small price to pay for a more immersive and detailed gaming experience. Just put on your headphones.
When AMD's inefficiency costs you an $80 power bill, skip lunch, it's all good...
When an Intel or nVidia product is $8 more, let alone $80, that settles the whole matter completely in AMD's favor and proves once again that AMD is the best bang for the buck...
That's how AMD fanboys play it.
I guess despite all the sickening propaganda of the AMD fans, no one is listening or buying it. AMD is dying and nearly dead, in market share and share price...
What the AMD fanboys forget is that no one likes being lied to, or told what to do.
As a gamer in Florida, I care very much... gaming in the summer, the heat produced is enormous and the central A/C runs overtime trying to keep temps down.
Ah, that reads wrong... I know that Haswell-E has no integrated graphics. What I meant was: Haswell-E runs hot, and for the other CPUs like Broadwell I have no use for the internal GPU.
All this wishful thinking on your part tells me you probably don't have a very strong, actual NEED. Because if you did, you'd be ecstatic about the 5820K or 5960X. They run at 4GHz all day long with standard air cooling and no knowledge of overclocking (I just use ASRock's SLOWEST default overclock settings in the BIOS setup).
It's almost humorous when I read these benchmarks, because they so understate the true, completely stable performance of the 5960X without doing any extra work setting it up. It's nearly twice as fast as the 4790K in practice (in fact it *is* about twice as fast when you consider how easily and quickly the 4790K throttles in a typical configuration).
I realize there are situations where generating more heat in the room matters. But if your multitasking were genuinely heavy enough to warrant the upgrade to 6 or 8 cores, you'd be foolish not to take advantage of the great solution that's available right now.
Double the cores means double the power consumption. There's no way around this, except making each core run slower... which diminishes the benefit of having them. If Haswell-E runs too hot for you, no other chip Intel would reasonably want to produce right now would satisfy you.
Why would you want to invest in 3 sticks of DDR4 RAM which will bring nearly no performance benefit and a massively overpriced motherboard for an extra couple of cores? It's not the cost of the Haswell-Es, or even their power consumption that bugs me - I suspect dual channel DDR3 is perfectly adequate for 6 cores alone given that it is apparently adequate for 4+GPU.
Still, it makes my investment in a 2500K which has been running at 4.5GHz on stock voltage for the last 3 and a half years on the cheapest Z68 board I could buy sound rather good. I think I'd struggle to gain more than 20% in performance for the outlay of buying a new motherboard and CPU. I built a 4770K-based rig for a friend last year, it won't scrape past 4.2GHz. I'm sure IPC + HT + Faster RAM makes it faster than my rig, but certainly not in any noticeable fashion.
Obviously it wouldn't matter to you. You didn't even bother to go with an i7.
But for those to whom it does matter it's the biggest step up we've seen in a desktop CPU in a long time. The additional RAM and motherboard cost is trivial to those for whom the extra performance provides more than amusement.
You're correct - to me, it seems adequate for video editing and running Android builds... I'm sure in your 'elite' i7 world, things are far quicker. My issue is not with the fact that Intel offers higher-spec or more expensive parts. Obviously, there would be no need for 18-core Xeons in this hypothetical world. My issue is the fact that they artificially constrain the 'mid-range' dual-channel architecture to 4 cores and lump us with an IGP that's never going to get used, taking up more die space than the extra couple of cores would, just because they don't have any competition.
And by the way, a ~84W TDP CPU is going to throttle when overclocked - especially when running AVX loads. If you haven't found the current limits in the BIOS, you probably deserve to believe your expensive 6-core running at only 4GHz is twice as fast as a properly set up 4790K :P
It's hardly elite - just live video. But if the CPU saturates or crashes, everybody will know.
I'm not skilled at overclocking, but for this application stability is paramount. I wouldn't want to push it close to the limit, only to have it crash 5 hours into the stream. When I tested 4-core CPUs I had to run at low resolutions, or else the video would stutter, and even at that I couldn't use all the features of the application.
With the same low-profile case and the same heatsink/fan the 4790K would throttle in less than 30 minutes while the 5820K and 5960X never would. They were all running at 4GHz. I'm sure someone more skilled could make all 3 CPUs run faster. But the 8-core is still going to be about twice as fast as the 4-core.
On the topic of the new CPUs, I won't be surprised if they O/C well enough to make them dominate the previous generations. But they won't be a match for the 2011-3 CPUs.
If you're into server workloads, 8 cores would be nice. For gaming, more cores only add more heat and hardly any benefit to performance; a quad-core with no integrated GPU is good enough. All this integrated GPU stuff is just a waste of resources. Most of us into performance and gaming don't want the extra heat and power consumption.
Would like to see some Quicksync Benchmarks! Anand noted fast Quicksync performance on the Iris Pro 5200, so I sure would like to see a) the differences between GT3e (Haswell) and GT3e (Broadwell) and b) the differences between GT2 (Haswell) and GT3e (Broadwell).
I'll second the request for QuickSync testing. I do a bit of video editing with Vegas and do some bulk conversions with Handbrake so I have a keen interest on this topic.
I'd like to see the 65W 4770R in the charts to measure the delta over a generation (okay, half a generation). Meanwhile, considering that the i5-5675C is $276 but games much worse than a Haswell Pentium + a decent $200 discrete card, I really don't see the point in traditional desktops. The only scenario I see fit would be a tiny case using a super-low-profile thin mini-ITX board, so as to create a system similar to, say, an Alienware Alpha. But wait, there aren't any thin mini-ITX H97 or Z97 boards, whereas back with Haswell we had thin mini boards but only non-Iris iGPUs. Oh Intel, there's always a part missing!
I wouldn't purchase one of these with those intentions, to be honest. They are DX11.5, not 12, and we've yet to see how well DX12 makes all the dGPUs and iGPUs play together in the real world.
But I also can't afford to be that early adopter anymore either.
Ian, a point for the OC review: Broadwell-C is listed as only supporting DDR3L-1600. You even underclock your memory for the stock review. What about higher memory speeds and voltages? Is it as painless as with older K series CPUs? The fat iGPU can certainly use more bandwidth despite having Crystal Well. And anyone profiting from Crystal Well as CPU cache could also use more bandwidth. Einstein@Home is a prime example for this.
I don't disagree, yet your comment seems oddly out of place under the review of 2 chips with features we have never seen combined before:
- 14 nm Broadwell (energy efficient, better IPC than Haswell)
- overclockable (the stock speeds are far too low, yet it already sometimes beats or ties the mighty i7 4790)
- Crystal Well (it's going to rock in some applications)
- twice as much GPU power as ever before in a socketed configuration (it's going to be a fine OpenCL 2.0 number cruncher for some use cases)
Anyone know a US Retailer with the i7-5775C in stock? I have everything else ready for my build, Motherboard, RAM, Gfx Card, etc. Just need the CPU and I have been patiently waiting for these.
Good, very good. Now that they have succeeded in this exercise, that is, shown they can build a decent iGPU, they can finally proceed to the real goal: make slightly weaker processors with the same iGPU for real laptops, where both portability and performance are needed at the same time. Then throw away the burden of the iGPU for real desktop parts and complete the desktop series with 80, 100 and 130 watt models.
Trying to persuade customers that from now on this is the high end is not going to work. The desktop, or desktop-like derivatives, should be at the front end of new computing challenges, none of which is what we describe today as consumer needs (with the exception of gaming, of course, which, admit it or not, will always be some of the most demanding software). Consider that pattern recognition (be it sound, video or logic-language) will demand a lot of parallel processing. Trying to push it as a network-based service will not succeed, at least as it is envisioned right now; a significant part of it should be done locally. Now take this advice and conquer, or abandon it and fail, lol...
I'm not sure that they've proven they can build a decent iGPU. The whole point of integrated graphics is to provide an affordable solution with 'good enough' performance. While they have certainly hit 'good enough' performance from both a CPU and GPU point of view, they've entirely missed the 'affordable' part of things. For the price of a single Intel CPU, you can almost get an entire AMD -system- that is only about 10% slower on average, which is no small matter.
Truth is (as pointed out) you can get an AMD chip with a $100 discrete graphics card which would blow either of the iGPUs away.
That's the issue both AMD and Intel have had with respect to their iGPUs. Once you've dedicated enough of the die to GPU transistors and skirted around the memory bottleneck (at least for Intel), you end up with a product that, while offering decent GPU performance, is horrendously overpriced for what it offers. Outside of niche scenarios, these things just don't make much sense given just how much performance even cheap discrete GPUs can offer.
I'd like to see what Intel's margins are on these parts and how much the eDRAM adds to the overall cost of the product. If yields are good and prices aren't much higher, I'd love to see Intel 'bite the bullet' in order to replace the entire product portfolio top-to-bottom. They'd benefit massively from it with respect to market share and being taken seriously with their gaming endeavor.
Unfortunately, I don't think Intel is willing to eat lower margins and a dip in profits for a long-term gain, even if it would buck the current dip in sales with an uptick in systems sold.
But of course they won't. Why lower prices, make less money now, kill AMD further, and then possibly make more money later, if you might lose out in some BS legal battle over having no competition? Because it will happen, you know it will.
When instead you can continue to enjoy your large margins with minimal griping because the blame is being split between Intel's greed and AMD's inability to compete. Watch them die slowly, make more money while doing it. They control this playing field, why rock the boat when it is more profitable than it should be already?
And an Intel chip too, I wonder who are these people who'll pay $276-366 for something that'll get totally trashed by a $64 G3260 + $2-300 discrete graphics card. I guess the target market is AIOs that value style and low power consumption over performance and cost. If you look at their laptop prices, they're the same. So I'm guessing Intel didn't want manufacturers underclocking the 65W chips and using them as cheap 47W laptop chips. And the AIO makers wouldn't like Intel releasing a socketed chip cheaper than the identical BGA chip so the price is set with no grounding to reality in the desktop market.
The majority of people could use a G3260 + a decent video card and have no clue it's a cheap underclass CPU. i5 and i7 are just a marketing ploy to get clueless people to buy the top level even though they don't know where the performance bottlenecks really are.
You know, two of my brothers have i5 CPUs, and my i7 utterly trounces them in every way. I was expecting the difference to be small, but oh no; for example, one brother's 3230M does wPrime in < 19s, my i7 in < 8s.
This speed difference is reflected in the way the PCs 'feels' in every way. I'd say i7's are worth the money, despite seeing some i5s offer up great benchmarks online...
If gaming, then yes, you need an actual decent gpu, but for everything else, CPU counts most...
Are you joking? A 3230M? Obviously in laptops the story is different. On the desktop the i5 and i7 are comparable, as the i7's main advantage is only HT, but they both have 4 physical cores all the same. Laptop i5s are crippled processors. If Intel offered an i5 with 4 physical cores at a reasonable price, I'd buy it instantly. But 2+2 cores vs 4+4 in a laptop i7 is a hell of a difference; obviously the processor can only be half as fast in multithreaded work if it's only HALF of the i7... If you care even a bit about performance in a laptop, an i7 is a no-brainer, and sadly Intel is pushing prices up and replacing the middle class with the U processors, which will ruin everyone's dreams about reasonably priced PC performance in the future. This leaves an opportunity for AMD, as soon a quad-core i7 will be unavailable in any laptop under 1000 dollars...
"I wonder who are these people who'll pay $276-366 for something that'll get totally trashed by a $64 G3260 + $2-300 discrete graphics card."
The same people who don't treat gaming benchmarks as the sole reason for their purchase. Once you get out of gamer culture, you'll find people value (whether mistakenly or not) CPU power over GPU power. In the corporate IT world, installing that $100 discrete GPU card can cost more than the hardware alone.
What is ‘good enough’ performance, though? I put a G3220 Pentium in my wife’s computer and it’s plenty good enough for everything she does, which I suspect represents 99.8% of the population (Microsoft Office stuff, email, Facebook and Netflix/Amazon Prime). Heck, I can even play DOTA 2 on it reasonably well.
Intel has had ‘good enough’ graphics for damn near everyone for a while now and that G3220 was like $55 on sale. To me ‘good enough’ graphics isn’t playing modern games at high resolution and quick frame rates. If you want an iGPU to do that, you should expect to pay for it since you’re in the <0.1% of the population that cares.
On the CPU itself: an IVB repeat: very minor CPU improvements, large GPU upgrade...
Oh, btw, since the CPUs are unlocked, can we get some identical speed benchmarks? Would be the nicest, easiest way to track IPC and eDRAM improvements.
I know. I feel they should have pushed out an early article explaining the issues and published the full review later in one piece. If you don't give good firmware, you get later coverage.
Wow, so anyone who buys one of these chips is spending half their money on a useless GPU that can't even beat a lowly R7 250. That's $120 to $180 totally wasted on a GPU which occupies half the die. Talk about a massive Intel tax. What happens if Intel only offers a K version with half the die wasted on this useless GPU? How many people are going to just suck it up and buy it, even though half the chip will never be used because they'll be running a real graphics card?
These chips really only make sense for high end laptops like the 2015 MacBook Pro - which ironically doesn't use them. It boggles my mind that Intel is shipping so many transistors that go completely unused. It's the antithesis of Moore's Law - Intel silicon is HALF-USELESS.
That's because Intel only cares about mobile now; this stuff isn't made for us, it's hacked to work for desktop users - it's all about mobile. Personally I deal with it, largely because I'm just happy that people who buy things like MacBooks can now actually have a chance of running Boot Camp and playing games. In the mobile world, iGPUs have always been a big part of the scene. Also, the better Intel does with integrated graphics, the more they are able to hurt AMD/NVidia, which is what they really want to do: slowly and steadily eat the bottom end of the GPU market out from under them. It used to be that ANY discrete graphics on a laptop was WAYYYYY better than integrated, but after Intel's 2nd-gen Core series the bottom X1XX and X2XX GPUs stopped making sense, and Intel has been getting better to the point that X4XX GPUs are starting to not make sense. This squeezes graphics makers into only being able to sell higher-end X5XX+ GPUs, and that destroys their bread-and-butter money.
As much as I hate to say it, I agree. While their new iGPU is beating AMD's iGPU, the place where such iGPUs make the most sense is small HTPC scenarios (apart from budget gaming laptops, which have a completely different thermal restriction). The kicker, though, is that the pricing is far too high to even be considered for what amounts to a media playback machine, and it's thermally too hot for a laptop scenario.
If they had paired the iGPU with a G3258 CPU core set and the Crystal Well DRAM, and priced it near AMD's offerings, THAT would be a very compelling product.
Actually, the GPU takes up over 60% of the die space on the first chips, and there is a second piece of silicon comprising the eDRAM that takes up a not-so-insignificant piece of real estate on the package.
The situation with AMD APUs is similar: about 40-45% of their die space is GPU. When they go HBM, they will be in a similar situation to Intel, and they'll need to charge a much higher price to make up for the tech.
If you want the lowest cost for the CPU, get a Pentium, particularly the Anniversary Edition. They're cheap (I can get them for about $50), you can overclock the shit out of them, and their iGPU only takes up about 40% of the die space. If you need more CPU power in the Socket 1150 format, get a Xeon E3, which has no iGPU... they are cheaper than a Core i7 but cost more than a Core i5.
Not really sure how appealing this will be for anyone on LGA1150 for the desktop, given Skylake is just around the corner. Certainly more appealing to heavy duty laptops, maybe NUCs for the better GPU capabilities but the prices are too high compared to low-end CPU + dGPU options (Alienware Alpha at $400-500 comes to mind).
Ian, buddy, you really need to step up your game when it comes to analyzing power, temperature, and noise. Seriously, AnandTech used to be a place where you could read a review of a product and have all the information you needed about it; now, once I'm done reading an AnandTech review, I have to look elsewhere to get the full story.
Old Anandtech: Comprehensive and comprehensible. New Anandtech: Comprehensible only because the reviews have become utterly incomprehensive.
So the actual CPU part takes up less than half the die. My forehead cannot take much more of this; there are just too many facepalms these days. And this is what they want us to pay $276 for - a CPU that would take up much less than 100 mm2 and should cost $100. God, please let Zen be a good CPU, please. I will pray every day; I want Haswell to be my last Intel CPU for a long time. I would just like to point out that an i3 with a 750 Ti will destroy this APU, offering PS4 performance in every single game, for pretty much the same cost.
Don't expect Zen to be some "cheap chip"; AMD has stated they're going to focus on performance rather than cost. I'm expecting Zen with HBM to cost as much as comparable Intel offerings.
Rather than complaining about the cost of new cutting-edge hardware, put your money where your mouth is and get the i3 + 750 Ti. I have one here and it serves well as a secondary machine (as well as a doorstop). I'd much rather use my 2500K + 960...
I'd really love to see benchmarks of Civilization V on this thing. With such a CPU-intensive game, it'd be interesting to see how much the L4 cache makes an impact, not just with integrated graphics, but also when using dedicated graphics, to see how much the L4 cache helps the raw CPU performance in a game that is so easily CPU-constrained...
I think Civ 5 is still single-threaded, or maybe dual-threaded, and has to process everything in order, so each turn still takes minutes. I've had the game since launch, with all the packs and DLC, and it doesn't stress my overclocked 3770K a bit. No core goes above 40%, yet a turn still takes forever. The game isn't 64-bit either, so there's also that.
Completely worthless release for gamers and PC enthusiasts... another year goes by with no reason to upgrade to a new CPU. We're officially back to the dark days before AMD kicked Intel in the nuts with the Athlon 64 and made them compete.
Skylake is coming in 2 months for gamers. Today's release is for system builders that need a stopgap. You shouldn't buy into AMD's PR campaign that you need to buy AMD to support innovation; Intel is doing that all by itself.
I agree - 2500K and 2600K here, and I haven't had a single reason to even be tempted to upgrade. The only good that has come of this is that nowadays I have gobs of cash to spend on GPUs. I guess this explains why NVidia can charge $999 for a GPU now and sell out.
Sad that Krzanich continues with this strategy of making products nobody needs and abusing their monopoly to charge way too much. Seriously, just 4 cores and a GPU that can't even do 1080p, at those prices? On the same die they could have fitted 12 cores and no GPU, but we'll never get that, because Intel has no interest in making good chips and the regulators are all dead. They'll choke on it; it's unavoidable.
I only see this being good in all-in-ones and mini PCs. I fully expect Apple to announce a Broadwell refresh for their iMacs before the year ends.
Too bad, because I'm looking to upgrade from an i3-2120 to a low-ish power i5, and this fits the bill except that the integrated graphics drives the price through the roof. I'm hoping for a 65W vanilla i5 that performs well, and this is close, but not quite right.
I somehow doubt it. They already refreshed the iMac a bit and dropped prices for the year. They don't update as often as everyone else; they're happy to wait.
AMD cannot compete. They are so far behind Intel on R&D, and even on vision and theory, that they have no chance of catching up to Intel any time soon. The CPU department is kept afloat by the GPU department, and that isn't going well either.
AMD doesn't have the money. The main reason it's behind the curve is that they couldn't exactly bring out something to replace Bulldozer, and as such had to milk it for all it's worth so it wasn't a complete waste of time that could've dragged them under. If AMD had the resources, I think we'd have seen a replacement for Bulldozer by now.
AMD's GPU tech is hampered by the process node and the lack of eDRAM/HBM. Carrizo may make a large difference in terms of power consumption, plus a lesser difference in terms of performance, but AMD stands to benefit a lot from throwing some memory on die. Coupled with better compression, they'll have a more competitive product, but I think Iris Pro 6200 holds the lead for the next 12-18 months (even if Carrizo does come to the desktop, will its performance be that much improved over Kaveri? I'm not so sure).
I think we need high quality tests for the iGPUs to see how that makes a difference. Intel may lead by even more.
Well, I see no reason to upgrade yet; maybe when Skylake drops. My Sandy Bridge (i5-2500K) will go for another generation. The CPU runs overclocked to 4.7 GHz; with a new chip I could save a boatload of power on the system.
This review is also bunk due to the fact that they are using a DDR3 card, when even the same card with GDDR5 is way faster and only marginally more expensive. No one in their right mind even buys a GPU with DDR3 onboard.
And AMD's market share in CPUs and GPUs has been going from what to what in those 5 years? Kinda like how Bulldozer was the first "true" quad-core but managed to run at 60% of the speed of the "fake quad" Q6600 while being hot as hell.
Technically Intel did graphics on-die first; they did it back in January 2010 with HD Graphics on Clarkdale and Arrandale. It was June 2011 when AMD released their first APU.
Hey, what's with the whole "Thank you A for X, thank you B for Y"... huh, what's going on, AT? Dare I say sarcasm towards manufacturers being crybabies about not being thanked enough?
And speaking of manufacturers... AMD, it's high time you pull a miracle (Zen) and bring it. I mean, the monopoly from the blue team is just ridiculous... for god's sake, AMD, please. Also, ARM and Samsung, you're welcome to join the desktop x86 party anytime... and the sooner the better.
Meanwhile I'll be rocking my delidded 3570K till Skylake is out.
Intel is ridiculous in what way? You know Skylake is out in 2 months, and that's why you are waiting for it. If you think Skylake isn't going to deliver, why bother to wait? Intel hasn't failed to deliver since the Pentium M, unlike AMD, which is just one dropped ball after another. Sit tight and us gamers will have Skylake in no time. Today's release is for system builders that need a stopgap product, not us.
First time in years I am not sure what to think about a new CPU... Is it good? Is it delivering? On some things it quite rocks; on others it seems like an unfinished product/partial release... guess the following months will tell...
Just wondering: As previous Iris Pro vs. other integrated graphics reviews have shown Iris Pro to perform substantially worse at higher resolutions, are more tests above 720p coming in part 2? Or are you just going for whatever setting gets reasonably playable frame rates?
Why the frack didn't you test a midrange card at 720p so we can compare it to Iris Pro? They can't even do 30 fps at 1080p on most of the games you tested. What were you thinking?!
On the low end gaming benchmarks for GRID, when benchmarking the R7 240, the minimum FPS for the A10-7700K is higher than the average FPS, which doesn't make sense.
Also wish they had tested Dual Graphics with an R7 240. The A10 (~$130) + R7 240 (~$70) = $200. That way we could compare a $200 APU + GPU combo vs Intel's new $200 CPUs.
I've tried it with an A8-7650K and I can tell you that the frame pacing is horrible, and the variance in frame rate is stupendous. Crysis 3, 720p, low: 60fps looking in one direction; spin around on the spot and you're at 20-30fps looking in another.
Wow, Intel won't even let AMD keep the better iGPU performance crown. At least APUs had higher gaming performance going for them; now they really have nothing. Plus this part uses less power and runs cooler than AMD's iGPU. Intel's engineering and process advantage is really showing. If I were building an HTPC, this chip would be my go-to. It can actually play any game with decent settings, so you can turn your HTPC into a console-style PC gaming machine as well.
Couldn't have put it better myself. APUs were AMD's last card, and Intel just took that card out of AMD's hand. If Zen isn't the second coming that's been promised (and I doubt it, given AMD's marketing track record), then AMD's CPU division is effectively dead.
That's what Anandtech would have people believe. In reality, FX chips like the $100 8320E (I paid $133.75 with an 8-phase-power motherboard from Microcenter) are very competitive with Intel for things like H.265 encoding, Blender, and so on -- especially on performance per dollar.
So, Anandtech sticks 6 APUs in its charts and not a single 8-thread FX chip.
I love it that you have a Linux section, and I was even pleasantly surprised to see Redis benchmarks, which I use a lot! For my purposes I'm investigating using:
1. MAXIMUS VII IMPACT mini-ITX board (because it's the only ITX board on the market AFAIK that supports M.2 x4; other ITX boards seem to only support M.2 x2, although they often don't make it clear)
2. Samsung XP941 128GB
3. M350 ITX case
4. Fedora 22 OS
"Due to time constraint we will save overclocking for part 2" - care to explain why both you and Tom's Hardware don't do OC? I have a feeling Intel isn't allowing reviewers to OC the samples at first, until at least a few weeks or even months later.
Can't blame you though; you need to keep getting samples from Intel.
I've got to say I like what Intel has done on the iGPU side, but no matter how you read this review, it actually doesn't look good for them. Overall, from these results Broadwell is between 1.5-3 times more expensive than AMD, and if that doesn't bother you, its performance in the games above was only 1.6-6 fps ahead, beating AMD by 10 fps in one game (not good value) while AMD beat Intel by 10 fps in another (nice). And don't forget we are comparing AMD on 28nm against Intel on 14nm, which casts a not so flattering picture of Intel. And need I mention the latency gain over AMD? Hardly anything to write home about. If you're an all-round user or a gamer, the only thing you'll notice from Broadwell is the hole in your wallet.
True that. Most people here are after CPU performance though. Still, it doesn't hurt to have a good iGPU in case your discrete card dies on you; you can still boot without a dGPU, and the iGPU can still do things at 1080p.
Interesting. I could see a particular niche for this: users needing very good CPU performance who want occasional gaming that's better than an entry-level graphics card. Can't wait for overclocking results.
I think AMD products do best when using the Mantle API; it can boost performance up to 30% over DirectX. Also, part of the AMD APU die (around 10%) is the TrueAudio DSP processor, so it is not 100% CPU and GPU.
8320E with 1600 8-8-8-24 RAM (running at 1585 due to FSB), Cinebench R15 multithread:
one core per module, 4.28 GHz: 399
one core per module, 5.02 GHz: 467
3.33 GHz: 540
one module disabled, 5 GHz: 590
3.8 GHz: 608
Intel i7-3770, 3.4 GHz: 662
4.17 GHz: 664
4.2 GHz: 675
4.28 GHz: 683
4.41 GHz: 703
4.56 GHz: 724
4.68 GHz: 743
4.78 GHz: 765
Intel 4770K at 4.4 GHz: 822
Clearly, an overclocked Intel will beat this chip in this test but if you have an Intel that can't be overclocked that may be a different story, depending on the stock clockspeed of the chip. Also, Cinebench is, as far as I know, a test that favors Intel since it relies heavily on FPU. None of the APUs are close to FX's performance so the lack of even one 8 thread FX in the charts is not good.
The single thread performance of FX is particularly weak but it can be improved quite a bit by setting the chip to 1 core per module because high clocks can be achieved with much less heat than when the full 8 threads are enabled. The downside is that multithread performance suffers (in tests like Cinebench that can use all 8 threads... perhaps not so much in programs that max out around 4).
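For what it's worth, the all-threads numbers in the list above scale almost perfectly with clock. A quick sanity check (just a sketch that recomputes the scores quoted in this post):

```python
# Cinebench R15 multithread scores for the FX-8320E quoted above,
# all modules/threads enabled, as (clock_ghz, score) pairs.
results = [(3.33, 540), (3.8, 608), (4.17, 664), (4.2, 675),
           (4.28, 683), (4.41, 703), (4.56, 724), (4.68, 743), (4.78, 765)]

points_per_ghz = [score / ghz for ghz, score in results]
for (ghz, score), ppg in zip(results, points_per_ghz):
    print(f"{ghz:.2f} GHz -> {score} pts ({ppg:.1f} pts/GHz)")

# The spread between best and worst efficiency is only ~2%,
# i.e. the chip is essentially clock-bound in this test.
print(f"spread: {max(points_per_ghz) / min(points_per_ghz):.3f}")
```

Every configuration lands around 160 points per GHz, which is why disabling cores to chase higher clocks roughly washes out in the multithreaded score.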
Anand, I'm going to heavily criticize your Linux testing method.
You go to great lengths to adjust and compensate for turbo modes, but Ubuntu 11.04 either completely ignores turbo modes on anything since Sandy Bridge or does not even know what turbo modes are, severely limiting both new Intel and new AMD CPUs. Advances in the Linux kernel have also changed the power profile: the new intel_pstate driver greatly improved performance under Linux compared to the past, and newer kernels are optimized for the eDRAM on the Crystal Well CPUs. I suggest you start moving the whole testing platform to something more current, or simply redo the tests once a year with an updated kernel. You could use the Phoromatic platform to automate the procedure.
So clearly Intel has now topped AMD's one remaining trump card--integrated graphics performance. Of course you're paying over 2x the cost for the privilege. One might argue that you're still getting a much better CPU at the same time, which is true, but most people just look at the absolute cost and the fact that the A10 is not all that bad a CPU.
I think this becomes *much* more interesting if/when Intel starts producing *i3* CPUs with Iris Pro. Then it's really game over for AMD, unless Zen is something spectacular to allow them to catch up (or even get close) in CPU performance. AMD could also easily up the SPs in their APUs to something like ~1024 which would give them R7 265/PS4/GTX 750 Ti -class performance. The only problem is AMD hits another hitch there as they don't have the thermal/power envelope to do so given A10s can already throttle the CPU side when the GPU is in heavy use, in order to stay within TDP limits. AMD's lack of power and thermal efficiency with their GPUs (as competent as they are otherwise) also poses a serious problem in putting the same tech on an APU die.
Seems like AMD has to hit a home run with both Zen for performance and their GPUs for efficiency in the coming year. However, from everything we know about Fiji, and the fact that the rest of the [GPU] lineup will be rebrands, it doesn't seem likely. Perhaps it's time to quit the CPU business, lol. Either that or release APUs with high-end graphics, boxed with CLC/AIO liquid coolers (or massive air coolers).
I know it's a dead socket, but could we AM3+ users request a CPU be thrown in the mix?
I keep reading, keep looking; the reviews look impressive, so I check the Anandtech Bench and my 6300 is still faster than the APUs.
In all honesty, that has to hurt AMD. Those of us that went with the "big boy socket" are still faster overall, even though the APU has made big leaps in IPC and clock speed.
Fact is, we're still out here, and the only way to get faster across the board is to jump to Intel.
Well keep in mind your FX 6300 has two more cores (or one more module) and it has 8MB L3 compared to no L3 at all on the APUs. If they made a "6 core" APU with L3 on Kaveri/Godavari it would be faster.. at least until it starts hitting the TDP limits, lol. This of course is probably why there are no 6 or 8 CPU-core APUs--because they're already having problems with keeping TDP within limits with a 512SP GPU and 4-core CPU.
"I know it's a dead socket, but could we AM3+ users request a CPU be thrown in the mix?"
It might mess up the charts, which clearly make it look like you need to buy an Intel CPU rather than a $100 8320E with a motherboard discounted $40 (Microcenter).
I think an i7 with Iris Pro 6200 is just a waste of die space, considering that you should be using a discrete card. An i5 should be fine, but an i3 w/ 6200 would be great for HTPC and light media scenarios.
Doubt they're gonna release another Broadwell though. Hopefully i3 Skylake gets the Iris Pro treatment.
Also, I don't see why people don't want integrated graphics on high end CPUs. I am under the impression that the integrated graphics is able to work alongside the discrete graphics card under DX12, adding at least some fps vs a CPU with no integrated graphics?
In the 2nd page, the first word of the paragraph above the "Broadwell-K, or is it Broadwell-C, or Broadwell-H?" header, "Ultiamtely", should be "Ultimately".
So over a month later...this seems like it was quite the paper launch. Not only have we not seen "part 2" of this review (with OC results), it doesn't seem like you can actually *buy* one of the CPUs anywhere.
Best I seem to have found is Amazon but they show it's not even going to be in stock until mid-August! Won't Skylake be around by then? Intel really doesn't care about Broadwell I guess--you have to wonder why they even bothered.
So I got the A8-7600 for $93 including shipping and 25% sales tax, for the kids. The i5-5675C is $348 with shipping and the same sales tax. With the i5 the gain would be 7-9 fps in GTA V, 6 fps in GRID Autosport, and about 0% in Battlefield 4 (A8-7600: 34 fps, 1920x1080, medium detail, with Mantle 14.6) - for an additional $255, that is.
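To put that $255 delta in perspective, here's a back-of-the-envelope cost-per-extra-frame calculation using only the numbers quoted above (the 8 fps GTA V figure is the midpoint of the quoted 7-9):

```python
# Prices as quoted above (including shipping and 25% sales tax).
price_a8_7600 = 93     # USD
price_i5_5675c = 348   # USD
delta = price_i5_5675c - price_a8_7600  # $255 extra for the i5

# Claimed fps gains of the i5-5675C over the A8-7600.
gains = {"GTA V": 8, "GRID Autosport": 6}

for game, fps in gains.items():
    print(f"{game}: ~${delta / fps:.0f} per extra frame per second")
# Battlefield 4 showed ~0 gain, so the cost per extra frame there is
# effectively unbounded.
```

Roughly $30-45 per extra frame per second, which is the point: for these titles the pricier iGPU is hard to justify.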
Well, who would've known at the time of this review (2015) that this slow-to-bring-up process, 14nm, would be in service for so many years and still be going today, in 2020. I think Intel should've reconsidered their plans for 10nm after seeing how slowly 14nm ramped, but probably management was somewhat communistic with the engineers.
nandnandnand - Tuesday, June 2, 2015 - link
A10-7870K is listed as $137 and A10-7850K as $173.
Ian Cutress - Tuesday, June 2, 2015 - link
$137 is the launch price of the A10-7870K. $173 was the launch price of the A10-7850K.
We mentioned why we do launch pricing in our graphs in previous reviews, but it comes down to our graphs not being dynamically linked to a retailer and we have to pick a point that's suitable over time. Launch pricing does that, even though there might be future discounts over time.
nandnandnand - Tuesday, June 2, 2015 - link
Ok, that makes sense. Business opportunity: add a drop-down to switch from launch price to "live price" for Newegg/Amazon/etc., hyperlink the live prices, get a cut from every click.
AS118 - Saturday, June 6, 2015 - link
I agree with nand. Other sites do this, and it's helpful to me because I often read reviews to make a purchase anyway, and it helps me see what the price is right now.
FlushedBubblyJock - Sunday, June 14, 2015 - link
Take a look at how much the GTX 980 stomps the AMD 290X above - in a review where they aren't concerned, paying attention, and picking the best games for AMD's gimpy hardware. Just look at the FPS difference... let it sink in - the reviewers haphazardly reveal the truth when they aren't intending to.
bloodroses75 - Wednesday, June 17, 2015 - link
Oh look, a current gen video card beat a last gen card that is just over half the price. You may want to try waiting until the 300 series is released/benchmarked before spouting how superior one is over the other. If the 300 series ends up being a dud (which it kinda looks like it will), so be it; at least it will be an 'apples to apples' comparison.
MrSpadge - Tuesday, June 2, 2015 - link
You don't need dynamically updated prices. Simply pick the prices around the release date of the new hardware. If the price of a 1 or 2 year old comparison chip was lowered significantly (yes, this still happens sometimes), comparing it based on launch price is misleading and will always make some people shout "unfair". You can easily fix this.
taltamir - Tuesday, June 9, 2015 - link
I agree, the obvious thing to do is use the price as it is at the time the article was written. That means launch price for the item being reviewed and current prices for all items it is being compared to.
ImSpartacus - Tuesday, June 2, 2015 - link
Since you have to do "snapshot" pricing for these reviews, you may want to consider looking at average pricing at the time of review. At least then the time frame for each snapshot is the same.
nathanddrews - Tuesday, June 2, 2015 - link
There's a fine line between "just enough" information and "too much". Prices for CPUs vary greatly depending on where you buy them.
MrSpadge - Tuesday, June 2, 2015 - link
Simply use something representative of the current situation, not old numbers which may be totally meaningless today. In Europe we have simple price comparison engines like geizhals.eu where it's really easy to find typical prices. Just exclude obvious outliers and cheap products not yet deliverable and voila, you've got the current street price.
nathanddrews - Tuesday, June 2, 2015 - link
In Europe? You mean on the Internet? LOL. There are tons of price comparison websites, but why should AT take the time and be responsible for tracking down pricing when we can all do that in our own areas or from our favorite retailers? They don't know where I like to shop or what currency I use, so it seems like a waste of their time. I'd much rather get quality performance analyses on a more regular basis than have them worry about updating price indexes. Who knows? Maybe their new parent company can do that for them off to the side or something?
MrSpadge - Wednesday, June 3, 2015 - link
I never see people from the US referring to them; they seem to just use Newegg or Amazon. And getting prices for European shops won't do AT any good, as they're mainly writing for the US. But AT has to look up the launch prices of the hardware anyway when they write a review. How much more time would it take to instead look up the current price in a comparison engine? Again, I'm not talking about updating prices dynamically.
If they're not doing this, you always get comments like "who in their right mind would buy X? Vendor Y must be totally stupid to keep offering this" - simply because the launch price is far higher than the current price.
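The outlier-trimming approach is simple enough to sketch. This is a hypothetical example (the listings and prices are made up for illustration; a real version would pull them from a comparison engine):

```python
# Hypothetical listings for one CPU from several shops: (price_eur, deliverable).
listings = [(255, True), (259, True), (262, True), (268, True),
            (239, False),   # not yet deliverable - excluded
            (329, True)]    # obvious outlier - trimmed below

prices = sorted(price for price, deliverable in listings if deliverable)

# Drop the cheapest and most expensive listing, then take the median
# of what's left as the "current street price".
trimmed = prices[1:-1] if len(prices) > 2 else prices
street_price = trimmed[len(trimmed) // 2]
print(street_price)  # 262 for this data
```

Not a real market average, but it matches the spirit of the suggestion: exclude undeliverable listings and outliers, and what's left is a usable snapshot price.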
Senpuu - Wednesday, June 3, 2015 - link
Why should AT take the time, you ask? Well, why should they list a price at all then? Hell, why provide any information to the reader beyond specs and performance metrics? Why even talk about the market at all? There is obviously value to be had in doing so, and it comes down to where you draw the line. Whereas you would rather they not 'waste their time', I would rather not waste mine by having to question whether a product is being valued near current market price or some esoteric launch price, and then being forced to google multiple things myself to get a feel for 'real world' prices. No, the price they come to will likely not match the exact total of the item in question at whatever preferred retailer a reader happens to use, but it will probably be within 5-10% of that value, which is highly useful for at-a-glance comparisons and understanding where something sits in a market segment. We're not talking about a large amount of effort here, but I'd rather AT do it than myself; it's less effort for them and it benefits a large portion of their readers. Simply provide a disclaimer about the methods used to gather the data as well as a liability warning, and everyone's happy.
Thud2 - Saturday, June 13, 2015 - link
Very good point.
yannigr2 - Wednesday, June 3, 2015 - link
There are official price cuts, and so you have new OFFICIAL pricing that is fair to put in the charts. If the launch price of the 7850K had been $500 and the last official price $100, wouldn't the above charts be completely misleading? People have a price/performance ratio in their minds when looking at these charts; in fact, for 90% that's the biggest parameter. And those charts LIE exactly there.
yannigr2 - Wednesday, June 3, 2015 - link
You can always take the last OFFICIAL price, but never mind. It's AMD's pricing that is wrong anyway. Who cares about poor, poor AMD?
Gothmoth - Tuesday, June 2, 2015 - link
Intel is sitting on its laurels because there is no competition. AMD is no competition for power users.
Users want Broadwell... mhm... aha...
Am I the only one who wants Intel to make a fast 6 (or better, 8) core CPU without integrated graphics, one that doesn't run too hot and doesn't cost a fortune?
Can't we have such a CPU for 500-600 euros?
OK, then I'll stick with my OC'd Ivy Bridge for a few more years... and you, Intel, can complain about declining sales.
MrSpadge - Tuesday, June 2, 2015 - link
That's exactly what the i7-5820K is for. If it consumes too much power for your taste - well, that's simply the price you have to pay for more cores.
Flunk - Tuesday, June 2, 2015 - link
The 5820K is even available for < €400.
ImSpartacus - Tuesday, June 2, 2015 - link
And it's not THAT expensive. It's certainly not cheap, but it's reasonable considering Intel's dominance.
bug77 - Tuesday, June 2, 2015 - link
I would love to see a quad core clocked higher (or with better IPC). I don't use highly parallel software that much and I have no use for an integrated GPU. Sadly, with no competition, Intel has just been pumping the GPU for 4 generations in a row. Not that I blame them.
Taneli - Tuesday, June 2, 2015 - link
Improving single threaded performance is extremely hard, just ask AMD. I'd expect improvements of 5-10% per generation for the next few years.
bug77 - Tuesday, June 2, 2015 - link
They could still raise the clock speed. Intel CPUs have been able to hit 4GHz on air for years. Even the top-level Haswell can do it, and that's a 4-core+HT part. If I could replace my i5-2500K with a 4GHz quad-core, that would be good enough for me. Removing the integrated GPU from the equation would yield even more thermal headroom for the CPU. But it's not happening. With CPU performance securely in their hands, Intel is trying to secure positions in the GPU and mobile markets (they'd be crazy not to at least try to diversify). Yet raising the clock speed conflicts with higher IPC, because raising clock speed needs a longer pipeline, and a longer pipeline means taking a more serious hit on branch mispredictions. AMD managed to seriously raise IPC with the Athlon XP. Intel did it with their Core architecture. And nothing has happened since, because there's no more pressure on Intel and AMD doesn't have the cash to invest anymore.
swaaye - Tuesday, June 2, 2015 - link
You don't know that the GPU is impeding clock rate. Increases to clock rate increase power consumption dramatically. You can always overclock Broadwell yourself.
IPC has been steadily improving. Core 2, and anything from AMD, are far behind at this point.
bug77 - Tuesday, June 2, 2015 - link
"IPC has been steadily improving. Core 2, and anything from AMD, are far behind at this point."
Not really. If you look here: http://www.anandtech.com/show/7003/the-haswell-rev...
You'll see there's usually less than 10% gained (watch the i7 3.5GHz parts). And that's spread over three generations. Broadwell brings another 2-3%.
And yes, AMD has been playing the same game, only they're stuck in their Athlon64 days.
MrSpadge - Tuesday, June 2, 2015 - link
Dude, Intel is offering exactly this with the i7-4790K: a really highly clocked quad core. Use your external GPU and the iGPU won't consume any power or limit clock speed. In fact it makes the die larger and thus helps cooling a bit.
sonicmerlin - Tuesday, June 2, 2015 - link
Oh please. What a garbage excuse. "It's hard"? Intel has more than doubled IPC/watt in their mobile chips over the last 3-4 years. They just don't care about high end IPC because they have no competition.
vision33r - Tuesday, June 2, 2015 - link
That is where the Xeon line offers better price and value than the 58xx line, if you don't need unlocked parts. You can buy many Xeon 6-cores cheaper than the 58xx line.
Pcorb - Tuesday, June 2, 2015 - link
I'm not so sure that high end desktop users are a large enough demographic that Intel will be complaining any time soon.
yannigr2 - Wednesday, June 3, 2015 - link
Yes, I've been shouting that for a month now, but of course I was treated as a brainless AMD fanboy when pointing it out, so I must be wrong. Now we have a chart that is misleading and hilarious at the same time. If Ian had done what is right and logical, those charts would have been informative, correct and fair. I think AMD created the 7870K just to troll Ian's charts.
Hulk - Tuesday, June 2, 2015 - link
It would be really nice if you would note the frequency each processor is running during each test. With all the turbos these days it's hard to know, and therefore hard to make IPC comparisons.
Mr Perfect - Tuesday, June 2, 2015 - link
Yes, please. Once upon a time I would have had the clocks for all the models memorized, but without that, comparing IPC is difficult to impossible unless the clocks are noted.
MrSpadge - Tuesday, June 2, 2015 - link
+1
Lonyo - Tuesday, June 2, 2015 - link
You state in the opening that if you upgrade on a 3 year cycle, you would be coming from Sandy Bridge. Would it not make sense to have some older Intel processors in the graphs?
Ian Cutress - Tuesday, June 2, 2015 - link
With a recently refreshed testing suite, we haven't gone back through enough generations yet with the new benchmarking scheme. You can still check legacy benchmarks (Cinebench 10 / POV-Ray) between old and new in Bench: www.anandtech.com/bench
Hulk - Tuesday, June 2, 2015 - link
Yes. One thing Anand did amazingly well was review the new Intel processors. Clock-normalized IPC comparisons against previous generations are really what we want to see, as well as normalized power consumption, i.e. energy used completing the same workload.
mgilbert - Tuesday, June 2, 2015 - link
As a normal home PC user/gamer, I couldn't care less about small differences in power consumption. Sure, 100 Watts might matter, but what matters most to me is outright performance.
Martin84a - Tuesday, June 2, 2015 - link
As a normal PC user/gamer, that's all I care about: less heat and thus less noise. Having a passively cooled PC that never goes above 65C under load is pure bliss.
mgilbert - Tuesday, June 2, 2015 - link
Less heat and less noise is nice. More FPS is better. When someone can build a silent system that can keep up with an i7 processor and GTX 970 video card, let me know. In the meantime, I won't compromise. Some fan noise is a small price to pay for a more immersive and detailed gaming experience. Just put on your headphones.
Gigaplex - Tuesday, June 2, 2015 - link
Why i7? The i5 is just as good when it comes to games, since hyperthreading doesn't do much for them.
hero4hire - Thursday, June 4, 2015 - link
We call those laptops. Passively cooled? That's a normal HTPC and an abnormal, niche PC user/gamer.
Refuge - Tuesday, June 2, 2015 - link
100 watts is usually less than $80 a year if it's only while you're gaming. Just say no to ordering lunch like 3 days a year and you're good. :)
FlushedBubblyJock - Friday, June 12, 2015 - link
When AMD's inefficiency costs an $80 bill, skip lunch, it's all good... When an Intel or NVIDIA product is $8 more, let alone $80, that settles the whole matter completely in AMD's favor and proves once again AMD is the best bang for the buck...
That's how AMD fanboys play it.
I guess despite all the sickening propaganda of the AMD fans, no one is listening or buying it.
AMD is dying and nearly dead - market share and share prices...
What the AMD fanboys forgot is that no one else likes being lied to, or told what to do.
Hulk - Tuesday, June 2, 2015 - link
I don't care about small differences in power consumption for home use either, but I do like the info for comparing nodes.
Iketh - Tuesday, June 2, 2015 - link
As a gamer in Florida, I care very much... gaming in the summer, the heat produced is enormous and the central A/C runs overtime trying to keep temps down.
DPUser - Thursday, June 4, 2015 - link
Go solar!
Iketh - Tuesday, June 2, 2015 - link
+1
Novacius - Tuesday, June 2, 2015 - link
They could put 8 cores in there instead of that GPU.
Gothmoth - Tuesday, June 2, 2015 - link
Well, we can dream. I could use more cores.
I use heavily multithreaded applications and do heavy multitasking.
Yet I have to live with 10% better performance per CPU generation. :(
Haswell-E is the only choice when I want to upgrade.
But it runs hot, and I have no use at all for integrated graphics.
So why not make another CPU tailored for people like us?
I mean, Intel makes enough different CPUs anyway.
Gothmoth - Tuesday, June 2, 2015 - link
Ah, that reads wrong... I know that Haswell-E has no internal graphics. What I meant was: Haswell-E runs hot, and for the other CPUs like Broadwell I have no use for the internal GPU.
DCide - Tuesday, June 2, 2015 - link
All this wishful thinking on your part tells me you probably don't have a very strong, actual NEED. Because if you did, you'd be ecstatic about the 5820K or 5960X. They run at 4GHz all day long with standard air cooling and no knowledge of overclocking (I just use ASRock's SLOWEST default overclock settings in the BIOS setup). It's almost humorous when I read these benchmarks, because they so understate the true, completely stable performance of the 5960X without doing any extra work setting it up. It's nearly twice as fast as the 4790K in practice (in fact it *is* about twice as fast when you consider how easily and quickly the 4790K throttles in a typical configuration).
I realize there are situations where generating more heat in the room matters. But if your multitasking were genuinely heavy enough to warrant the upgrade to 6 or 8 cores, you'd be foolish not to take advantage of the great solution that's available right now.
MrSpadge - Tuesday, June 2, 2015 - link
Double the cores means double the power consuption. There's no way around this, except making each core run slower.. which diminishes the benefit of having them. If Haswell-E runs too hot for you, no other chip Intel would reasonably want to produce right now would satisfy you.nikaldro - Tuesday, June 2, 2015 - link
But haswell-E is still on 22nm, not 14nm.nevcairiel - Tuesday, June 2, 2015 - link
Then wait for Broadwell-E.Azurael - Tuesday, June 2, 2015 - link
Why would you want to invest in 3 sticks of DDR4 RAM, which will bring nearly no performance benefit, and a massively overpriced motherboard for an extra couple of cores? It's not the cost of the Haswell-Es, or even their power consumption, that bugs me - I suspect dual channel DDR3 is perfectly adequate for 6 cores alone, given that it is apparently adequate for 4 cores plus a GPU. Still, it makes my investment in a 2500K, which has been running at 4.5GHz on stock voltage for the last three and a half years on the cheapest Z68 board I could buy, sound rather good. I think I'd struggle to gain more than 20% in performance for the outlay of a new motherboard and CPU. I built a 4770K-based rig for a friend last year; it won't scrape past 4.2GHz. I'm sure IPC + HT + faster RAM makes it faster than my rig, but certainly not in any noticeable fashion.
DCide - Tuesday, June 2, 2015 - link
Obviously it wouldn't matter to you. You didn't even bother to go with an i7. But for those to whom it does matter, it's the biggest step up we've seen in a desktop CPU in a long time. The additional RAM and motherboard cost is trivial to those for whom the extra performance provides more than amusement.
Azurael - Tuesday, June 2, 2015 - link
You're correct - to me, it seems adequate for video editing and running Android builds... I'm sure in your 'elite' i7 world, things are far quicker. My issue is not with the fact that Intel offers higher spec or more expensive parts. Obviously, there would be no need for 18-core Xeons in this hypothetical world. My issue is the fact that they artificially constrain the 'mid' range dual channel architecture to 4 cores and lump us with an IGP that's never going to get used, taking up more die space than the extra couple of cores would, just because they don't have any competition.
Azurael - Tuesday, June 2, 2015 - link
And by the way, a ~84W TDP CPU is going to throttle when overclocked - especially when running AVX loads. If you haven't found the current limits in the BIOS, you probably deserve to believe your expensive 6-core running at only 4GHz is twice as fast as a properly set up 4790K :P
extide - Tuesday, June 2, 2015 - link
Have you ever overclocked a CPU before? Those limits are easily raised, and a properly O/C'd build will not throttle...
DCide - Tuesday, June 2, 2015 - link
It's hardly elite - just live video. But if the CPU saturates or crashes, everybody will know.
I'm not skilled at overclocking. But for this application, stability is paramount. I wouldn't want to push it close to the limit, only to have it crash 5 hours into the stream. When I tested 4-core CPUs I had to run at low resolutions, or else the video would stutter. And even at that I couldn't use all the features of the application.
With the same low-profile case and the same heatsink/fan the 4790K would throttle in less than 30 minutes while the 5820K and 5960X never would. They were all running at 4GHz. I'm sure someone more skilled could make all 3 CPUs run faster. But the 8-core is still going to be about twice as fast as the 4-core.
On the topic of the new CPUs, I won't be surprised if they O/C well enough to make them dominate the previous generations. But they won't be a match for the 2011-3 CPUs.
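The back-and-forth on throttling above comes down to simple power arithmetic: dynamic power scales roughly with frequency times voltage squared, so a TDP-sized cooler that is fine at stock can be overwhelmed by a modest overclock. A minimal sketch, with purely illustrative clocks and voltages (not figures from the review):

```python
# Rough dynamic power scaling: P ~ P0 * (f/f0) * (V/V0)^2.
# All numbers here are illustrative assumptions, not measured values.
def scaled_power(p0_watts, f0_ghz, f_ghz, v0, v):
    """Estimate package power after an overclock from (f0, v0) to (f, v)."""
    return p0_watts * (f_ghz / f0_ghz) * (v / v0) ** 2

# A nominal 84 W quad-core pushed from 4.0 GHz @ 1.10 V to 4.4 GHz @ 1.25 V:
p = scaled_power(84, 4.0, 4.4, v0=1.10, v=1.25)
print(round(p))  # well past the 84 W TDP, so a TDP-sized cooler throttles
```

On this model, a ~10% clock bump with a ~14% voltage bump already lands roughly 40% over the rated TDP - which is why the same heatsink that holds one chip at 4GHz can throttle another.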
vision33r - Tuesday, June 2, 2015 - link
If you're into server workloads, 8 cores would be nice. For gaming, more cores only add more heat and hardly any benefit to performance; quad-core is good enough, with no integrated GPU. All this integrated GPU stuff is just a waste of resources. Most of us into performance and gaming don't want the extra heat and power consumption.
FlushedBubblyJock - Sunday, June 14, 2015 - link
If the kiddies can play WoW on it, AMD and Intel are happy. They will never stop.
Galatian - Tuesday, June 2, 2015 - link
Would like to see some QuickSync benchmarks! Anand noted fast QuickSync performance on the Iris Pro 5200, so I sure would like to see a) the differences between GT3e (Haswell) and GT3e (Broadwell) and b) the differences between GT2 (Haswell) and GT3e (Broadwell).
Refuge - Tuesday, June 2, 2015 - link
I think that is coming in part 2, judging by the first two pages of the article.
Kevin G - Tuesday, June 2, 2015 - link
I'll second the request for QuickSync testing. I do a bit of video editing with Vegas and do some bulk conversions with Handbrake, so I have a keen interest in this topic.
dj_aris - Tuesday, June 2, 2015 - link
I'd like to see the 65W 4770R in the charts to measure the delta over a generation (okay, half a generation). Meanwhile, considering that the i5-5675C is $276 but games much worse than a Haswell Pentium + a decent $200 discrete card, I really don't see the point in traditional desktops. The only scenario I see fit would be in a tiny case using a super-low-profile thin mini-ITX board, so as to create a system similar to, say, an Alienware Alpha. But wait, there aren't any thin mini-ITX H97 or Z97 boards. Whereas back with Haswell we had thin mini boards, but only non-Iris iGPUs. Oh Intel, there's always a part missing!
DCide - Tuesday, June 2, 2015 - link
I think you mean you don't see the point for a gaming desktop. A Pentium + $200 dGPU would be a poor choice for most traditional desktops.
Mech0z - Tuesday, June 2, 2015 - link
With DX12 supporting multi-GPU much better, could it be theorized that these will be very good for DX12 games together with a dGPU?
jimbo2779 - Tuesday, June 2, 2015 - link
I could be wrong, but I doubt the difference will be huge or even noticeable in most games and setups.
Refuge - Tuesday, June 2, 2015 - link
I wouldn't purchase one of these with those intentions, to be honest. They are DX11.5, not 12, and we've yet to see how well DX12 makes all the dGPUs and iGPUs play together in the real world.
But I also can't afford to be that early adopter anymore either.
XZerg - Tuesday, June 2, 2015 - link
It would be good to note the month each series of CPUs was launched, as that would really tell the story better.
MrSpadge - Tuesday, June 2, 2015 - link
Ian, a point for the OC review: Broadwell-C is listed as only supporting DDR3L-1600, and you even underclock your memory for the stock review. What about higher memory speeds and voltages? Is it as painless as with older K-series CPUs? The fat iGPU can certainly use more bandwidth despite having Crystal Well, and anyone profiting from Crystal Well as CPU cache could also use more bandwidth. Einstein@Home is a prime example of this.
watzupken - Tuesday, June 2, 2015 - link
I feel Intel is creating way too many models with slight differences.
MrSpadge - Tuesday, June 2, 2015 - link
I don't disagree, yet your comment seems oddly out of place under the review of 2 chips with features we have never seen combined before:
- 14 nm Broadwell (energy efficient, better IPC than Haswell)
- overclockable (the stock speeds are far too low, yet it already sometimes beats or ties the mighty i7-4790)
- Crystal Well (it's going to rock in some applications)
- twice as much GPU power as ever before in a socketed configuration (it's going to be a fine OpenCL 2.0 number cruncher for some use cases).
AtenRa - Tuesday, June 2, 2015 - link
At what settings did you run the memory on the AMD APUs, and why only 720p on the integrated gaming benchmarks?
Novacius - Tuesday, June 2, 2015 - link
I'd like to see a comparison to Haswell's GT3e, too. Will there be one?
CFTheDragon - Tuesday, June 2, 2015 - link
Anyone know a US retailer with the i7-5775C in stock? I have everything else ready for my build - motherboard, RAM, graphics card, etc. I just need the CPU, and I have been patiently waiting for these.
Refuge - Tuesday, June 2, 2015 - link
They shouldn't be available publicly until about the end of the month. But you may find some early ones if you keep an eye on the right channels.
Taneli - Tuesday, June 2, 2015 - link
So how does it compare to the 4770R?
IUU - Tuesday, June 2, 2015 - link
Good, very good. Now that they have succeeded in this exercise - that is, showing they can build a decent iGPU - they can finally proceed to the real goal: make slightly weaker processors with the same iGPU for real laptops, where portability and performance are needed at the same time.
Then, throw away the burden of the iGPU part for real desktop parts and complete the desktop series with 80, 100 and 130 watt models. Trying to persuade customers that from now on this is the high end is not going to work. The desktop, or desktop-like derivatives, should be at the front end of new computing challenges, none of which is what we describe today as consumer needs (with the exception of gaming, of course, which - admit it or not - will always be one of the most demanding kinds of software).
Consider that pattern recognition (be it sound, video or logic/language) will demand a lot of parallel processing. Trying to push it as a network-based service will not succeed, at least as it is envisioned right now. A significant part of it should be done locally. Now take this advice and conquer, or abandon it and fail lol...
bill.rookard - Tuesday, June 2, 2015 - link
I'm not sure that they've proven they can build a decent iGPU. The whole point of integrated graphics is to provide an affordable solution with 'good enough' performance. While they have certainly hit 'good enough' performance from both a CPU and GPU point of view, they've entirely missed the 'affordable' part of things. For the price of a single Intel CPU you can almost get an entire AMD -system- which, while about 10% slower on average, is no small matter.
Truth is (as pointed out) you can get an AMD chip with a $100 discrete graphics card which would blow either of these iGPUs away.
mrdude - Tuesday, June 2, 2015 - link
That's the issue both AMD and Intel have had with respect to their iGPUs. Once you've dedicated enough of the die to GPU transistors and skirted around the memory bottleneck (at least for Intel), you end up with a product that, while offering decent GPU performance, is horrendously overpriced for what it offers. Outside of niche scenarios, these things just don't make much sense given just how much performance even cheap discrete GPUs can offer.
I'd like to see what Intel's margins are on these parts and how much the eDRAM adds to the overall cost of the product. If yields are good and prices aren't much higher, I'd love to see Intel 'bite the bullet' in order to replace the entire product portfolio top-to-bottom. They'd benefit massively from it with respect to market share and being taken seriously in their gaming endeavor.
Unfortunately, I don't think Intel is willing to eat lower margins and a dip in profits for a long-term gain, even if it would see a bucking of the current dip in sales with an uptick in systems sold.
Refuge - Tuesday, June 2, 2015 - link
They have the best margins in the business.
But of course they won't. Why lower prices, make less money now, kill AMD faster, and then possibly make more money later - if you don't lose out on some BS legal battle over having no competition? Because it will happen, you know it will.
When instead you can continue to enjoy your large margins with minimal griping, because the blame is being split between Intel's greed and AMD's inability to compete. Watch them die slowly, make more money while doing it. They control this playing field; why rock the boat when it is already more profitable than it should be?
Kjella - Tuesday, June 2, 2015 - link
And an Intel chip too. I wonder who these people are who'll pay $276-366 for something that'll get totally trashed by a $64 G3260 + $2-300 discrete graphics card. I guess the target market is AIOs that value style and low power consumption over performance and cost. If you look at their laptop prices, they're the same. So I'm guessing Intel didn't want manufacturers underclocking the 65W chips and using them as cheap 47W laptop chips. And the AIO makers wouldn't like Intel releasing a socketed chip cheaper than the identical BGA chip, so the price is set with no grounding in reality in the desktop market.
vision33r - Tuesday, June 2, 2015 - link
The majority of people could use a G3260 + a decent video card and have no clue it's a cheap underclass CPU. i5 and i7 are just a marketing ploy for clueless people to buy the top level even though they don't know where the performance bottlenecks really are.
Notmyusualid - Thursday, June 4, 2015 - link
You know, two of my brothers have i5 CPUs, and my i7 utterly trounces them, in every way. I was expecting the difference to be small, but oh no: for example, one brother's 3230M does wPrime in < 19s, my i7 in < 8s.
This speed difference is reflected in the way the PC 'feels' in every way. I'd say i7s are worth the money, despite seeing some i5s offer up great benchmarks online...
If gaming, then yes, you need an actually decent GPU, but for everything else, CPU counts most...
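For what it's worth, the quoted wPrime times imply a straightforward ratio; treating "< 19s" and "< 8s" as roughly 19s and 8s:

```python
# Speedup implied by the wPrime times quoted above (approximate figures).
i5_time_s = 19.0   # dual-core i5-3230M, from the comment
i7_time_s = 8.0    # quad-core i7, from the comment
speedup = i5_time_s / i7_time_s
print(f"{speedup:.2f}x")  # ~2.4x - roughly the thread-count ratio
```

~2.4x is close to the thread-count ratio (4 threads vs 8), which is what you'd expect on a well-threaded benchmark - so most of the gap here is core count rather than anything i5s do wrong per core.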
xulmar - Saturday, June 6, 2015 - link
Are you joking? A 3230M? Obviously in laptops the story is different. On the desktop the i5 and i7 are comparable, as the i7's main advantage is only HT, but they've both got 4 physical cores all the same. Laptop i5s are crippled processors. If Intel offered an i5 with 4 physical cores at a reasonable price, I'd buy it instantly. But 2 cores + 2 threads vs 4 + 4 in a laptop i7 is a hell of a difference. Obviously the processor can only be half as fast in multithreaded work if it's only HALF of the i7... If you care even a bit about performance in a laptop, the i7 is a no-brainer, and sadly Intel is pushing prices up and replacing the middle class with the U processors, which will sadly ruin everyone's dreams about reasonably priced PC performance in the future. This leaves an opportunity for AMD, as soon a quad-core i7 will be unavailable in any laptop under 1000 dollars...
Namisecond - Wednesday, June 3, 2015 - link
"I wonder who these people are who'll pay $276-366 for something that'll get totally trashed by a $64 G3260 + $2-300 discrete graphics card."
The same people who don't treat gaming benchmarks as the sole reason for their purchase. Once you get out of gamer culture, you'll find people value (whether mistakenly or not) CPU power over GPU power. In the corporate IT world, installing that $100 discrete GPU card can cost more than the hardware alone.
ppi - Wednesday, June 3, 2015 - link
Can a G3260 run the latest games that require a quad-core CPU?
wallysb01 - Tuesday, June 2, 2015 - link
What is ‘good enough’ performance, though? I put a G3220 Pentium in my wife’s computer and it’s plenty good enough for everything she does, which I suspect represents 99.8% of the population (Microsoft Office stuff, email, Facebook and Netflix/Amazon Prime). Heck, I can even play DOTA 2 on it reasonably well. Intel has had ‘good enough’ graphics for damn near everyone for a while now, and that G3220 was like $55 on sale.
To me, ‘good enough’ graphics isn’t playing modern games at high resolution and quick frame rates. If you want an iGPU to do that, you should expect to pay for it, since you’re in the <0.1% of the population that cares.
Namisecond - Wednesday, June 3, 2015 - link
"Truth is (as pointed out) you can get an AMD chip with a $100 discrete graphics card which would blow either of the iGPUs away."
Or you can compare that AMD chip and dGPU combo with an even cheaper Intel Pentium and the same dGPU combo... :)
The_Assimilator - Wednesday, June 3, 2015 - link
Give it a generation or two - the cost of eDRAM will fall dramatically once Intel starts shipping it in millions of CPUs.
Refuge - Tuesday, June 2, 2015 - link
They have proven they can build it, but at that price it is a bit of a moot point ATM.
First let's see the price of these iGPUs come down before we count it a success.
ZeDestructor - Tuesday, June 2, 2015 - link
This whole multipart review is slightly alarming...
On the CPU itself: an IVB repeat - very minor CPU improvements, large GPU upgrade...
Oh, btw, since the CPUs are unlocked, can we get some identical speed benchmarks? Would be the nicest, easiest way to track IPC and eDRAM improvements.
Refuge - Tuesday, June 2, 2015 - link
The article mentioned problems with beta firmware stopping them from including overclocking and iGPU comparisons here.
ZeDestructor - Wednesday, June 3, 2015 - link
I know. I feel that they should have pushed an early article out explaining the issues and published the review later on in one piece. You don't give good firmware, you get later coverage.
vFunct - Tuesday, June 2, 2015 - link
The MSI page-takeover ads here leave ZERO margins from the text.
Ryan Smith - Tuesday, June 2, 2015 - link
On it. Sorry about that.
Shadowmaster625 - Tuesday, June 2, 2015 - link
Wow, so anyone who buys one of these chips is spending half their money on a useless GPU that can't even beat a lowly R7 250. That is $120 to $180 totally wasted on a GPU, which occupies half the die. Talk about a massive Intel tax. What happens if Intel only offers a K version that contains half the die wasted by this useless GPU? How many people are going to just suck it up and buy it even though half the chip will never be used, because they will be running a real graphics card?
TEAMSWITCHER - Tuesday, June 2, 2015 - link
These chips really only make sense for high-end laptops like the 2015 MacBook Pro - which ironically doesn't use them. It boggles my mind that Intel is shipping so many transistors that go completely unused. It's the antithesis of Moore's Law - Intel silicon is HALF-USELESS.
PubFiction - Wednesday, July 1, 2015 - link
That's because Intel only cares about mobile now; this stuff isn't made for us, it's hacked to work for desktop users - it is all about mobile. Personally I deal with it largely because I am just happy that people who buy stuff like MacBooks can now actually have a chance of running Boot Camp and playing games. In the mobile world iGPUs have always been a big part of the scene. Also, the better Intel does with integrated graphics, the more they are able to kill AMD/NVIDIA, which is what they really want to do: slowly and steadily eat the bottom end of the GPU market out from under them. It used to be that ANY discrete graphics on a laptop was WAYYYYY better than integrated. But after Intel's 2nd-gen Core series the bottom X1XX and X2XX GPUs seemed to not make any sense, and Intel has been getting better to the point now that X4XX GPUs are starting to not make sense. This squeezes graphics makers into only being able to sell higher-end X5XX+ GPUs, and that destroys their bread-and-butter money.
bill.rookard - Tuesday, June 2, 2015 - link
As much as I hate to say it, I agree. While their new iGPU is beating AMD's iGPU, the place where such iGPUs make sense most of all is in small HTPC scenarios (apart from budget gaming laptops, which have a completely different thermal restriction). The kicker is, though, that the pricing is far too high to even be considered for what amounts to a media playback machine, and it's thermally too hot for a laptop scenario.
If they had paired up the iGPU with a G3258 CPU core set and the Crystal Well DRAM, and priced it near AMD's offerings, THAT would be a very compelling product.
Refuge - Tuesday, June 2, 2015 - link
I'd buy that.
extide - Tuesday, June 2, 2015 - link
Uhhh, maybe you need your eyes checked, but it is beating the R7 240 in all except one of the benchmarks...
MikhailT - Tuesday, June 2, 2015 - link
Dude, go re-read the graphs, Intel is beating the R7 in almost all benchmarks.
Namisecond - Wednesday, June 3, 2015 - link
Actually, the GPU takes up over 60% of the die space on the first chips, and there is a 2nd piece of silicon comprising the eDRAM that takes up a not-so-insignificant piece of real estate on the package.
The situation with AMD APUs is similar: about 40-45% of their die space is GPU. When they go HBM, they will be in a similar situation to Intel, and they'll need to charge a much higher price to make up for the tech.
If you want the lowest cost/best value for a CPU, get a Pentium, particularly the Anniversary Edition. They're cheap (I can get them for about $50), you can overclock the shit out of them, and their iGPU only takes up about 40% of their die space. If you need more CPU power in the Socket 1150 format, get a Xeon E3, which has no iGPU... they are cheaper than a Core i7 but cost more than a Core i5.
der - Tuesday, June 2, 2015 - link
Awesome stuff! Killer chipset!
der - Tuesday, June 2, 2015 - link
50th comment!
chizow - Tuesday, June 2, 2015 - link
Not really sure how appealing this will be for anyone on LGA1150 for the desktop, given Skylake is just around the corner. Certainly more appealing for heavy-duty laptops, maybe NUCs, for the better GPU capabilities, but the prices are too high compared to low-end CPU + dGPU options (the Alienware Alpha at $400-500 comes to mind).
CuriousBeing - Tuesday, June 2, 2015 - link
I could never understand why the FX-8350/FX-8370 are never used in these benchmarks...
Refuge - Tuesday, June 2, 2015 - link
It is probably because of the new test setup; they haven't re-run everything yet.
Not that I consider that a good excuse - I know they are busy though, and it is an answer to your question at least, if that helps. :P
junky77 - Tuesday, June 2, 2015 - link
Broadwell is not for users who want high integrated GPU performance or anything like that. It's an upgrade route for many on Haswell.
alacard - Tuesday, June 2, 2015 - link
Ian, buddy, you really need to step up your game when it comes to analyzing power, temperature, and noise. Seriously, AnandTech used to be a place where you could read a review of a product and have all the information you needed about it, and now once I'm done reading an AnandTech review I have to look elsewhere to get the full story.
Old AnandTech: comprehensive and comprehensible.
New Anandtech: Comprehensible only because the reviews have become utterly incomprehensive.
Step it up buddy.
Navvie - Thursday, June 4, 2015 - link
A bit harsh, but I agree with the point. I'm now waiting (hoping) for somebody at Ars or TPU to give a more comprehensive review.
This Delta power consumption shit has to go as well.
Harry Lloyd - Tuesday, June 2, 2015 - link
So the actual CPU part takes up less than half the die. My forehead cannot take much more of this; there are just too many facepalms these days.
And this is what they want us to pay $276 for - a CPU that would take up much less than 100 mm2 and should cost $100.
God, please, let Zen be a good CPU, please. I will pray every day, I want Haswell to be my last Intel CPU for a long time.
I would just like to point out that an i3 with a 750 Ti will destroy this APU, offering PS4 performance in every single game, for pretty much the same cost.
Namisecond - Wednesday, June 3, 2015 - link
Don't expect Zen to be some "cheap chip" - AMD has stated they're going to focus on performance rather than cost. I'm expecting Zen with HBM to cost as much as comparable Intel offerings.
Rather than complaining about the cost of new cutting-edge hardware, put your money where your mouth is and get the i3 + 750 Ti. I have one here and it serves well as a secondary machine (as well as a doorstop). I'd much rather use my 2500K + 960...
shelbystripes - Tuesday, June 2, 2015 - link
I'd really love to see benchmarks of Civilization V on this thing. With such a CPU-intensive game, it'd be interesting to see how much the L4 cache makes an impact, not just with integrated graphics, but also when using dedicated graphics, to see how much the L4 cache helps the raw CPU performance in a game that is so easily CPU-constrained...
Peichen - Tuesday, June 2, 2015 - link
I think Civ 5 is still single-threaded, or maybe dual-threaded, and has to process everything in order, so each turn still takes minutes. I've had the game since launch with all the packs & DLCs, and it doesn't stress my overclocked 3770K a bit. No core hits above 40%, yet a turn still takes forever. The game isn't 64-bit either, so there is also that.
Jumangi - Tuesday, June 2, 2015 - link
Completely worthless release for gamers and PC enthusiasts... another year goes by with no reason to upgrade to a new CPU. We're officially back to the dark days before AMD kicked Intel in the nuts with the Athlon 64 and made them have to compete.
So sad...
Peichen - Tuesday, June 2, 2015 - link
Skylake is coming in 2 months for gamers. Today's release is for system builders that need a stopgap. You shouldn't buy into AMD's PR campaign that you need to buy AMD to support innovation; Intel is doing that all by itself.
PubFiction - Wednesday, July 1, 2015 - link
I agree - 2500K and 2600K here, and there hasn't been a single reason to even be tempted to upgrade. The only good that has come of this is that nowadays I have gobs of cash to spend on GPUs. I guess this explains why NVIDIA can charge $999 for a GPU now and sell out.
jjj - Tuesday, June 2, 2015 - link
Sad that Krzanich continues with this strategy of making products nobody needs and abusing their monopoly to charge way too much.
Seriously, just 4 cores and a GPU that can't even do 1080p, at those prices? On the same die they could have fitted 12 cores and no GPU, but we'll never get that because Intel has no interest in making good chips and the regulators are all dead.
They'll choke on it, it's unavoidable.
TallestJon96 - Tuesday, June 2, 2015 - link
I only see this being good in all-in-ones and mini PCs. I fully expect Apple to announce a Broadwell refresh for their iMacs before the year ends.
Too bad, because I'm looking to upgrade from an i3-2120 to a low-ish power i5, and this fits the bill except that the integrated graphics drive the price through the roof. I'm hoping for a 65W vanilla i5 that performs well, and this is close, but not quite right.
MikhailT - Tuesday, June 2, 2015 - link
I somehow doubt it. They already refreshed the iMac a bit and dropped the prices for the year. They don't update as often as everyone else; they're happy to wait.
I expect Apple to go full Skylake next year.
Peichen - Tuesday, June 2, 2015 - link
Well, AMD should pay or sue Intel to keep Intel from integrating the GT3e GPU into sub-$150 CPUs. That would kill all of AMD's market above $80.
MikhailT - Tuesday, June 2, 2015 - link
WTF - or how about AMD compete and make better stuff instead?
Peichen - Tuesday, June 2, 2015 - link
AMD cannot compete. They are so far behind Intel on R&D, and even vision and theory, that they have no chance of catching up to Intel any time soon. The CPU department is kept afloat by the GPU department, and that isn't going well either.
silverblue - Tuesday, June 2, 2015 - link
AMD doesn't have the money. The main reason it's behind the curve is that they couldn't exactly bring out something to replace Bulldozer, and as such had to milk it for all it's worth so it wasn't a complete waste of time that could've dragged them under. If AMD had the resources, I think we'd have seen a replacement for Bulldozer by now.
AMD's GPU tech is hampered by the process node and the lack of eDRAM/HBM. Carrizo may make a large difference in terms of power consumption, plus a lesser difference in terms of performance, but AMD stands to benefit a lot from throwing some memory on die. Coupled with better compression they'll have a more competitive product, but I think Iris Pro 6200 has the lead for the next 12-18 months (even if Carrizo does come to the desktop, will its performance be that much improved over Kaveri? I'm not so sure).
I think we need high quality tests for the iGPUs to see how that makes a difference. Intel may lead by even more.
HotRod917 - Tuesday, June 2, 2015 - link
Hahaha... "lay the smackdown" on Broadwell's candy a$$! Good one ;)
Asomething - Tuesday, June 2, 2015 - link
So can you guys include 2400MHz RAM in the part 2 gaming benchmarks? It would be interesting to see what both sides gain from the faster RAM.
bobjones003@gmail.com - Tuesday, June 2, 2015 - link
Well, I see no reason to upgrade yet; maybe when Skylake drops. My Sandy Bridge (i5-2500K) will go for another generation. The CPU runs overclocked to 4.7GHz, though by upgrading I could save a boatload of power on the system.
Shadowmaster625 - Tuesday, June 2, 2015 - link
This review is also bunk due to the fact that they are using a DDR3 card, when even the same card with GDDR5 is way faster and only marginally more expensive. No one in their right mind even buys a GPU with DDR3 onboard.
bloodypulp - Tuesday, June 2, 2015 - link
5 years after the first AMD APU with graphics on-die, Intel finally has graphics on die.
Well done Intel. *slow clap*
Peichen - Tuesday, June 2, 2015 - link
And AMD's market share in CPUs and GPUs has been going from what to what in those 5 years? Kinda like how Bulldozer was the first "true" quad-core but managed to run at 60% of the speed of the "fake quad" Q6600 while being hot as hell.
bloodypulp - Tuesday, June 2, 2015 - link
Process advantage, much of it.
olafgarten - Tuesday, June 2, 2015 - link
Technically Intel did graphics on-die first; they did it back in January 2010 with HD Graphics on Clarkdale and Arrandale. It was in June 2011 when AMD released their first APU.
Ryan Smith - Wednesday, June 3, 2015 - link
Clarkdale was a separate GPU die, on-package with the CPU. Intel didn't integrate the GPU onto their CPU until Sandy Bridge in January of 2011.
HotRod917 - Tuesday, June 2, 2015 - link
Hey, what's with the whole "Thank you A for X, thank you B for Y"... huh, what's going on, AT? Dare I say sarcasm towards manufacturers being crybabies for not being thanked enough?
And speaking of manufacturers... AMD, it's high time you pull a miracle (Zen) and 'bring it'. I mean, the monopoly from the blue team is just ridiculous... for god's sake AMD, please. Also ARM and Samsung, you're welcome to join the desktop x86 party anytime... and the sooner the better.
Meanwhile I'll be rockin' my delidded 3570K till Skylake is out.
Peichen - Tuesday, June 2, 2015 - link
Intel is ridiculous in what way? You know Skylake is out in 2 months, and that's why you are waiting for it. If you think Skylake isn't going to deliver, why bother to wait? Intel hasn't failed to deliver since the Pentium M, unlike AMD, which is just one dropped ball after another. Sit tight and us gamers will have Skylake in no time. Today's release is for system builders that need a stopgap product, not us.
Khenglish - Tuesday, June 2, 2015 - link
Why is the L3 cache only 6MB? Is 2MB disabled, or did Intel cut the L3 size to reduce latencies since there is now an L4?
Ryan Smith - Wednesday, June 3, 2015 - link
i5 processors typically have some of their L3 cache disabled.
HollyDOL - Tuesday, June 2, 2015 - link
First time in years I am not sure what to think about a new CPU... Is it good? Is it delivering? On some things it quite rocks; on others it seems like an unfinished product/partial release... I guess the following months will tell...
Valantar - Tuesday, June 2, 2015 - link
Just wondering: as previous Iris Pro vs. other integrated graphics reviews have shown Iris Pro to perform substantially worse at higher resolutions, are more tests above 720p coming in part 2? Or are you just going for whatever setting gets reasonably playable frame rates?
sonicmerlin - Tuesday, June 2, 2015 - link
Why the frack didn't you test a midrange card at 720p so we can compare it to Iris Pro? They can't even do 30 fps at 1080p on most of the games you tested. What were you thinking?!
bobhays - Tuesday, June 2, 2015 - link
In the low-end gaming benchmarks for GRID, when benchmarking the R7 240 the minimum FPS for the A10-7700K is higher than the average FPS, which doesn't make sense.
bobhays - Tuesday, June 2, 2015 - link
Also wish they had tested Dual Graphics with an R7 240. The A10 (~$130) + R7 240 (~$70) = $200. That way we could compare a $200 APU + GPU combo vs Intel's new $200 CPUs.
OrphanageExplosion - Wednesday, June 3, 2015 - link
I've tried it with an A8-7650K and I can tell you that the frame pacing is horrible, and the variance in frame rate is stupendous. Crysis 3, 720p, low: 60fps looking in one direction, spin around on the spot and you're looking at 20-30fps in another direction.
ryrynz - Wednesday, June 3, 2015 - link
Might wanna fix up that 128MB eDRAM comment, Ian.
ryrynz - Wednesday, June 3, 2015 - link
Fairly certain it's 64MB on more than one of them.
Ryan Smith - Wednesday, June 3, 2015 - link
All GT3e parts have 128MB of eDRAM. Did we mess up and put the wrong value at some point in this article?
patrickjp93 - Thursday, June 4, 2015 - link
No, you're right. The 64MB version of Crystal Well is coming to Skylake only, as far as I'm aware.
Laststop311 - Wednesday, June 3, 2015 - link
Wow, Intel won't even let AMD keep the better iGPU performance crown. At least APUs had higher gaming performance going for them; now they really have nothing. Plus this part uses less power and runs cooler than AMD's iGPU. Intel's engineering and process advantage is really showing. If I was building an HTPC this chip would be my go-to: it can actually play any game at decent settings. Turn your HTPC into a console-style PC gaming machine as well.
The_Assimilator - Wednesday, June 3, 2015 - link
"Now they really have nothing."
Couldn't have put it better myself. APUs were AMD's last card, and Intel just took that card out of AMD's hand. If Zen isn't the second coming that's been promised (and I doubt it because of AMD's marketing track record), then AMD's CPU division is effectively dead.
Oxford Guy - Saturday, June 6, 2015 - link
That's what AnandTech would have people believe. In reality the FX chips, like the $100 8320E (I paid $133.75 with an 8-phase-power motherboard from Microcenter), are very competitive with Intel for things like H.265 encoding, Blender, and so on - especially on performance per dollar.
So, AnandTech sticks 6 APUs in its charts and not a single 8-thread FX chip.
Oxford Guy - Saturday, June 6, 2015 - link
Example... top scoring APU in Cinebench multithread: 325
8-thread FX at just 3.33 GHz: 540
at a more reasonable 4.28 GHz: 683
4.56 GHz: 724
4.78 GHz: 765
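As a quick sanity check on the figures above, the score-per-GHz ratio is nearly constant across all four clocks, i.e. the quoted Cinebench multithread scores scale almost linearly with frequency:

```python
# Points-per-GHz for the Cinebench multithread scores quoted above.
scores = {3.33: 540, 4.28: 683, 4.56: 724, 4.78: 765}  # GHz -> score
per_ghz = {f: s / f for f, s in scores.items()}
spread = max(per_ghz.values()) - min(per_ghz.values())
# All four ratios sit around 159-162 points/GHz, i.e. near-linear scaling.
print({f: round(v, 1) for f, v in per_ghz.items()}, round(spread, 1))
```

That's the behaviour you'd expect from a throughput benchmark with no memory bottleneck, so the four scores are at least mutually consistent.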
kevinkga - Wednesday, June 3, 2015 - link
I love it that you have a Linux section, and I was even pleasantly surprised to see Redis benchmarks, which I use a lot! For my purposes I'm investigating using:
1. Maximus VII Impact mini-ITX board (because it's the only ITX board on the market afaik that supports M.2 x4. Other ITX boards seem to only support M.2 x2, although they often don't make it clear.)
2. Samsung XP941 128GB
3. M350 ITX case
4. Fedora 22 OS
xchaotic - Wednesday, June 3, 2015 - link
"Samsung XP941 128GB" - a bit small I think - best to get 256GB straight away.boozed - Wednesday, June 3, 2015 - link
A salient question perhaps, but I wouldn't go calling it a poignant one.
Zingam - Wednesday, June 3, 2015 - link
Well, nothing to see here! Move along, sir! Where is the Skylake?
ryrynz - Wednesday, June 3, 2015 - link
Ian, what driver was used? Intel just released a new one with some significant performance improvements (15.36.21.4222). Retest?
Phartindust - Wednesday, June 3, 2015 - link
Please fix the graphs for GRID min frames - or did the A10-7700K really beat everything else by 10fps?
unityole - Thursday, June 4, 2015 - link
"due to time constraint we will save overclocking for part 2" care to explain why both you and tomshardware don't do OC? i have a feeling intel not allowing reviewers to do OC at first for sample, until at least few weeks or even months later.can't blame you though, need to continuously get sample from intel.
unityole - Thursday, June 4, 2015 - link
I'm extremely interested in OC. Voltage used, power consumption, and also performance are what we're after before moving on to Skylake.
Notmyusualid - Thursday, June 4, 2015 - link
I'll second the request for identical-speed benchmarks too, thanks.
albert89 - Thursday, June 4, 2015 - link
I've got to say I like what Intel has done with the iGPU side of its chips, but no matter how you read this review, it actually doesn't look good for them. Overall, Broadwell from these results is between 1.5-3 times more expensive than AMD, and if that doesn't bother you, then consider that its performance in the games above was between 1.6-6 points ahead, while in one game it beat AMD by 10 points (not good) and AMD beat Intel by 10 fps in another (nice). And don't forget we are comparing AMD (28nm) against Intel (14nm), which casts a not-so-flattering picture of Intel. And need I mention the latency gain over AMD is hardly anything to write home about. If you're an all-round user or a gamer, the only thing you'll notice from Broadwell is the hole in your wallet.
unityole - Thursday, June 4, 2015 - link
True that. Most people here are after CPU performance though. It doesn't hurt to have a good iGPU in addition, in case your discrete card dies on you; you can still boot without a dGPU, and the iGPU can still do things at 1080p.
NvidiaWins - Thursday, June 4, 2015 - link
Literally no gain over Devil's Canyon...
zodiacfml - Friday, June 5, 2015 - link
Interesting. I could see a particular niche for this: users needing very good CPU performance who want occasional gaming that is better than an entry-level graphics card. Can't wait for overclocking results. I hope prices go down with Skylake's...
iTon - Saturday, June 6, 2015 - link
I think AMD products perform best when using the Mantle API; it can give up to a 30% boost over the DirectX API. Also, part of the AMD APU die (around 10%) is the TrueAudio DSP processor, so it is not 100% CPU and GPU.
Oxford Guy - Saturday, June 6, 2015 - link
Six APUs and not one FX chip in the charts. LAME
Oxford Guy - Saturday, June 6, 2015 - link
Heaven forbid someone might see that you can get a better Cinebench multithread score from a $100 8320E than from a pricey Intel...
Oxford Guy - Saturday, June 6, 2015 - link
8320E with 1600 8-8-8-24 RAM at 1585 due to FSB, Cinebench 15 multithread:
one core per module, 4.28 GHz, 399
one core per module, 5.02 GHz, 467
3.33 GHz, 540
one module disabled, 5 GHz, 590
3.8 GHz, 608
Intel i7-3770, 3.4 GHz, 662
4.17 GHz, 664
4.2 GHz, 675
4.28 GHz, 683
4.41 GHz, 703
4.56 GHz, 724
4.68 GHz, 743
4.78 GHz, 765
Intel 4770K at 4.4 GHz, 822
Clearly, an overclocked Intel will beat this chip in this test but if you have an Intel that can't be overclocked that may be a different story, depending on the stock clockspeed of the chip. Also, Cinebench is, as far as I know, a test that favors Intel since it relies heavily on FPU. None of the APUs are close to FX's performance so the lack of even one 8 thread FX in the charts is not good.
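As a quick back-of-the-envelope sketch (hypothetical, using only the 8320E multithread figures posted above), the scores scale almost linearly with clock speed:

```python
# Cinebench R15 multithread scores vs. clock for the FX-8320E,
# taken from the figures posted in this comment thread (all 8 threads enabled).
scores = {3.33: 540, 3.8: 608, 4.17: 664, 4.28: 683,
          4.41: 703, 4.56: 724, 4.68: 743, 4.78: 765}

base_clock, base_score = 3.33, 540
for clock in sorted(scores):
    # 1.00 would mean the score grows exactly in proportion to clock speed
    linearity = (scores[clock] / base_score) / (clock / base_clock)
    print(f"{clock:.2f} GHz -> {scores[clock]} ({linearity:.2f}x linear)")
```

Every ratio comes out around 0.98-0.99, i.e. within a few percent of perfectly linear scaling across this clock range.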
The single thread performance of FX is particularly weak but it can be improved quite a bit by setting the chip to 1 core per module because high clocks can be achieved with much less heat than when the full 8 threads are enabled. The downside is that multithread performance suffers (in tests like Cinebench that can use all 8 threads... perhaps not so much in programs that max out around 4).
single thread
A10-7850K, 92
4.2 GHz, 99
A10-6800K, 100
4.78 GHz, 111
5 GHz, 118
5.02 GHz, (one core per module), 120
5.21 GHz (only one module enabled), 122
5.17 GHz (one core per module), 122
Intel i7 3770, 3.4 GHz, 138
Intel i7 3930K, 3.3 GHz, 148
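The tradeoff described above can be put in rough numbers (a hypothetical sketch using only scores quoted in these comments: single-thread 99 at 4.2 GHz with all cores vs. 120 at 5.02 GHz with one core per module, and multithread 683 at 4.28 GHz vs. 467 at 5.02 GHz with one core per module):

```python
# Single-thread gain vs. multithread loss when running the FX-8320E
# with one core per module, using scores quoted in this thread.
all_cores = {"single": 99, "multi": 683}        # ~4.2-4.28 GHz, 8 threads
one_per_module = {"single": 120, "multi": 467}  # ~5.02 GHz, 4 threads

st_gain = one_per_module["single"] / all_cores["single"] - 1
mt_loss = 1 - one_per_module["multi"] / all_cores["multi"]
print(f"single-thread gain: {st_gain:+.0%}")  # about +21%
print(f"multithread loss: -{mt_loss:.0%}")    # about -32%
```

So the one-core-per-module trick buys roughly a fifth more single-thread speed at the cost of about a third of the multithread throughput, matching the tradeoff described above.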
Oxford Guy - Saturday, June 6, 2015 - link
Highest scoring APU in the multithread test:
A10-6800K, 325
sireangelus - Saturday, June 6, 2015 - link
Anand, I'm going to heavily criticize your Linux testing method. You go to great lengths to adjust and compensate for turbo modes, but Ubuntu 11.04 either completely ignores turbo modes on anything since Sandy Bridge, or does not even know what turbo modes are, severely limiting both new Intel and AMD CPUs. Also, the power profile has changed alongside the new intel_pstate driver, which greatly improved performance under Linux compared to the past; new kernels are also optimized for the eDRAM on the Crystal Well CPUs. I suggest you start moving the whole testing platform to something more current, or simply redo tests once a year with an updated kernel. You could use the Phoromatic platform to automate the procedure.
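For what it's worth, on a modern kernel you can check which cpufreq driver and governor are actually in play before benchmarking (these are the standard Linux sysfs paths; treat this as a sketch, since the files simply won't exist on other systems):

```shell
# Print the active cpufreq scaling driver (e.g. intel_pstate) and governor
# for cpu0; falls back gracefully if the sysfs files are absent.
for f in scaling_driver scaling_governor; do
    path="/sys/devices/system/cpu/cpu0/cpufreq/$f"
    if [ -r "$path" ]; then
        echo "$f: $(cat "$path")"
    else
        echo "$f: not available"
    fi
done
```

On a box still running the old ondemand governor (or no turbo awareness at all), this makes the limitation visible immediately.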
Ramon Zarat - Sunday, June 7, 2015 - link
Please add a clock-for-clock comparison with older Intel CPUs! Back to at least Sandy Bridge, or even further if possible.
ES_Revenge - Sunday, June 7, 2015 - link
So clearly Intel has now topped AMD's one remaining trump card--integrated graphics performance. Of course you're paying over 2x the cost for the privilege. One might argue that you're still getting a much better CPU at the same time, which is true, but most people just look at the absolute cost and the fact that the A10 is not all that bad a CPU.
I think this becomes *much* more interesting if/when Intel starts producing *i3* CPUs with Iris Pro. Then it's really game over for AMD, unless Zen is something spectacular to allow them to catch up (or even get close) in CPU performance. AMD could also easily up the SPs in their APUs to something like ~1024, which would give them R7 265/PS4/GTX 750 Ti-class performance. The only problem is AMD hits another hitch there, as they don't have the thermal/power envelope to do so, given A10s can already throttle the CPU side when the GPU is in heavy use in order to stay within TDP limits. AMD's lack of power and thermal efficiency with their GPUs (as competent as they are otherwise) also poses a serious problem in putting the same tech on an APU die.
Seems like AMD has to hit a homerun with both Zen for performance and their GPUs for efficiency in the coming year. However from everything we know about Fiji and the fact that the rest of the [GPU] lineup will be re-brands, it doesn't seem likely. Perhaps it's time to quit the CPU business, lol. Either that or release APUs with high-end graphics, boxed with CLC/AIO liquid coolers (or massive air coolers).
0ldman79 - Sunday, June 7, 2015 - link
I know it's a dead socket, but could we AM3+ users request a CPU be thrown in the mix? I keep reading, keep looking, the reviews look impressive, so I check the Anandtech Bench and my 6300 is still faster than the APUs.
In all honesty, that has to hurt AMD. Those of us that went with the "big boy socket" are still faster overall even though the APU has made big leaps in IPC and clock speed.
The fact is we're still out here, and the only way to get faster across the board is to jump to Intel.
ES_Revenge - Monday, June 8, 2015 - link
Well, keep in mind your FX 6300 has two more cores (or one more module), and it has 8MB of L3 compared to no L3 at all on the APUs. If they made a "6-core" APU with L3 on Kaveri/Godavari it would be faster... at least until it starts hitting the TDP limits, lol. This of course is probably why there are no 6- or 8-CPU-core APUs--because they're already having problems keeping TDP within limits with a 512SP GPU and a 4-core CPU.
Oxford Guy - Wednesday, June 10, 2015 - link
"I know it's a dead socket, but could we AM3+ users request a CPU be thrown in the mix?"It might mess up the charts, which clearly make it look like you need to buy an Intel CPU rather than a $100 8320E with a motherboard discounted $40 (Microcenter).
Spectrophobic - Sunday, June 7, 2015 - link
I think an i7 with Iris Pro 6200 is just a waste of die space, considering that you should be using a discrete card. An i5 should be fine, but an i3 with the 6200 would be great for HTPC and light media scenarios. Doubt they're gonna release another Broadwell though. Hopefully the i3 Skylake gets the Iris Pro treatment.
PauloBrazil - Thursday, June 11, 2015 - link
The tests with DirectX 12 are missing. DirectX 11 has already been surpassed.
DirectX 11 relies more heavily on a single core.
Romulous - Thursday, June 11, 2015 - link
Some reviewers, meanwhile, correctly call UHD... well, UHD. I thought a site like Anandtech would get this correct. Stop calling it 4K!
crashtech - Tuesday, June 16, 2015 - link
OK, time for part 2, please! :)
varg14 - Wednesday, June 17, 2015 - link
When is part 2?
Staafk - Wednesday, June 17, 2015 - link
Also waiting for part 2 now. Any ETA on that? :-)
Also, I don't see why people don't want integrated graphics on high-end CPUs. I am under the impression that with DX12 the integrated graphics can work alongside the discrete graphics card, adding at least some fps versus a CPU with no integrated graphics?
tuxRoller - Sunday, June 21, 2015 - link
You might want to give the Phoronix Test Suite a look. It is very easy to use and provides many, many tests.
AnnonymousCoward - Thursday, June 25, 2015 - link
I don't get it. Someone please explain why Intel doesn't make a non-iGPU version to cut their silicon cost in half for those who don't want it.
edwardtoday - Monday, July 13, 2015 - link
On the 2nd page, the first word of the paragraph above the "Broadwell-K, or is it Broadwell-C, or Broadwell-H?" header, "Ultiamtely", should be "Ultimately".
ES_Revenge - Monday, July 13, 2015 - link
So over a month later... this seems like it was quite the paper launch. Not only have we not seen "part 2" of this review (with OC results), it doesn't seem like you can actually *buy* one of the CPUs anywhere.
The best I seem to have found is Amazon, but they show it's not even going to be in stock until mid-August! Won't Skylake be around by then? Intel really doesn't care about Broadwell, I guess--you have to wonder why they even bothered.
Gadgety - Sunday, August 2, 2015 - link
So I got the A8-7600 for $93 including shipping and 25% sales tax for the kids. The i5-5675C is $348 with shipping and the same sales tax. With the i5, the gain would be 7-9 fps in GTA V, 6 fps in GRID Autosport, and about 0% in Battlefield 4 (A8-7600: 34 fps, 1920x1080, medium detail, with Mantle 14.6) - for an additional $255, that is.
felipetga - Wednesday, August 30, 2017 - link
Would it work on an H81 chipset motherboard?
yeeeeman - Saturday, March 14, 2020 - link
Well, who would've known at the time of this review (2015) that this slow-to-bring-up process, 14nm, would be in service for so many years - and still is today, in 2020. I think Intel should've reconsidered their plans for 10nm after seeing how slowly 14nm ramped, but management was probably somewhat communistic with the engineers.