Quite the read. Very informative. Anandtech has some of the best tech writers; true online journalism. Sometimes I miss that while reading other tech blogs... You guys are a cut above... at least one site still is.
How do I downvote stupid crap like this "Tag team Intel fanboy puke." comment so that collectively we can see high quality comments without having to wade through the interturds as well? It really takes away from the best article I have read in a long time. Not because it is about Intel, but because it is about the state of the art.
Many people enjoy well written and informative articles. Are you telling me that if you wrote, you would not enjoy positive feedback from your readers?
Not sure about him, but I've looked into this article to figure out power targets for Haswell (especially interesting to compare to the ARM crowd), NOT to read orgasmic comments about the eternal wisdom of Intel's engineering...
I finally made it through this article... hell, I took a course in organization and architecture earlier this year and I didn't come close to understanding everything written here.
Still, it was a great read. Thanks for going to the trouble, Anand. :-)
What's great is that Anand's been doing this for 15 years, has hired new editors along the way, and the quality hasn't wavered. I'm glad they haven't polluted their front page with shallow tech blogging like other sites I once enjoyed.
I can't imagine this hobby without this site. I got into PC building just as it came online and have depended on it ever since.
I disagree. Ryan Smith's 660 Ti article had some ridiculous conclusions and went on and on about a bandwidth issue that isn't an issue at 1920x1200. As evidenced by the fact that in their own tests it beat the 7950B in 6 games by OVER 20% but lost in one game by less than 10% at 1920x1200. Read the comments section where I reduced his arguments to rubble. He went on about a dumb Korean monitor you'd have to EBAY to get (or Amazon from a guy with ONE review, no phone, no FAQ page, no domain, and a gmail account for help... LOL), and which runs at 2560x1440. If his conclusions were based on 1920x1200 like he said (which he repeated to me in the comments, yet he touts some "enthusiast 2560x1440" Korean monitor as an excuse for his conclusions), he would have been forced to say the truth, which was what his own benchmarks showed and hardocp stated: it wipes the floor with the 7950B, just as the 680 does with the 7970GHz (yea, even with 8x MSAA), where they also proved only 1 in 4 games was even above 30fps... at 2560x1600 with high AA, which is why it's pointless to draw conclusions based on 2560x1600 as Ryan did. Heck, 2 of the 4 games in hardocp's high-AA article didn't even reach above 20fps (15 & 17, and if bandwidth is an issue, how come the 660 Ti won anyway?... LOL)
Ryan was reduced to being a fool when I was done with him, and then Jarred W. came in and insinuated I was an Ahole & uninformed... ROFL. I used all of his own data from the 660 Ti & 7970B & 7970GHz edition articles (all by Ryan!) to point out how ridiculous his conclusions were. When a card loses 6 out of 7 games, you leave out Starcraft 2 (which you used for 2 previous articles 1 & 2 months before, then again IMMEDIATELY after), which would have shown it beating even the 7970GHz edition (as all the NV cards beat it in that game, hence he left it out), you claim some Korean Ebay'd monitor as a reason for your asinine conclusions (clear bias to me), while in the 6 games it loses by an avg of 20% or more at the ONLY res 68 24in monitors on newegg use (or below; most are 1920x1080, not even 1920x1200, only <2% in the steampowered.com hardware survey have above 1920x1200, and most of those run dual cards), you've clearly WAVERED in your QUALITY since Anand took up Macs/phones.
I'm all for trying to save AMD (quit lowering your prices, idiots, maybe you'll make some money), but stooping to dumb conclusions when all of your own evidence points in the exact opposite direction is really shady. Worse, it was BOTH editors: Ryan gave up (the evidence was voluminous, he wisely ran and hid) and Jarred stepped in to personally attack me instead of the data... ROFLMAO. You know you've lost when you say nothing about my numbers at all and resort to personal attacks. Neither Ryan nor Jarred is dumb. They should have just admitted the article was full of bias or just changed the conclusion and moved on. With all the evidence I pointed out, I wouldn't have wanted it to stay in print any longer. It's embarrassing if you read the comments section after the article. You go back and realize what they did and wonder what the heck Ryan was thinking. He said that same crap in his next article. Either he loves AMD, gets money/hardware or something, or maybe he just isn't as smart as I thought :)
Anand's last hardware article on Haswell said it would be a "MONSTER", but its graphics won't catch AMD's integrated GPU and we only get 5-15% on the CPU side for a TOCK release. 2x GPU doesn't mean much with it being 9 months away, and it won't even catch AMD if they sit still. OUCH. So basically much ado about nothing on the desktop side, with a hope they can do something with it in mobile below 10W (and only a tablet even then). I was pondering waiting for the "MONSTER" but now I know I'll just buy an Ivy at Black Friday... ROFL. What monster? In this article he says Broadwell is now the "monster"... heh. Bah... At least I got to read this before Black Friday. I would have been ticked had I read this after it, hoping for the desktop monster. Since AMD now sucks on the CPU side we get speed-bin bumps for microarchitecture TOCKs instead of 25-40% like the old days. I pray AMD stops the price war with NV and starts taking profits soon.
If it wasn't for their advantage in integrated GPUs, they'd be bankrupt already, and they will be there by Xmas 2014 at the current burn of 650mil/year in losses (they only have 1.5Bil in the bank and billions in debt, compared to 3.5B cash for NV and no debt, never mind giving up the race to Intel, who dwarfs NV by 10x on all fronts). AMD's only choice will be to further reduce their stock value by dilution of shares (AGAIN!), which will finally put them out to pasture. Hopefully someone will pick up their IP, put a few billion into it and compete again with Intel (Samsung, IBM, NV if AMD stock drops to $1 by then; even they could do it). Otherwise, my next card/cpu upgrade after Black Friday will cost $1000 each as NV/INTC suck us all dry. Their stock is already WAY down in credit rating (B+ last I checked, FAR from NV's AAA), and they are listed at a 50% chance of bankruptcy vs. all their competitors at 1% (INTC, QCOM, NVDA, Samsung etc). The idea that they'll take over mobile is far-fetched at best. I see nowhere but down for their share price. That sucks. I hate Apple, but at this point I wouldn't even mind if they picked them up and ran with AMD's CPU mantle. We might start getting Ivy 3770's (or the next king) at prices less than $329 then! The first sale I've seen was $309 in my email from newegg this weekend, and that sucks after 7 months. No speed upgrades, no price drops, just the same thing for 7 months with no pressure from an uncompetitive AMD. Their GPU sucks compared to the 660 Ti (hotter, noisier, less perf), so no Black Friday discount. You either go AMD for worse but savings, or pay through the nose for NV. Same with Intel and the CPU. In that respect I guess I get Ryan trying to save them... ROFL. But prolonging the inevitable isn't helping; I'd rather have them go belly up now and someone buy the CPU side and run with it before it's so far behind Intel that nobody can fix it no matter who buys the IP. I digress...
God that was painful to even attempt to read. :/ Comparing AMD vs. nVidia to AMD vs. Intel is foolish in the extreme (there's a rather significant difference in the cost/performance balance, where AMD and nVidia are actually competitors) so I feel justified in not reading most of that screed.
If you owned a site and could delegate reviews you don't find interesting (oooh boy, another 15-pound overpriced gaming laptop!), wouldn't you do the same thing?
Mmh, I've also noticed how Anand seems to have become quite an Apple fan. Don't get me wrong, I love his reviews, and Anandtech as a whole. But the fact that Anand always keeps talking about Apple is an eyesore to me. Particularly annoying in this article was how he mentioned "iPad form factor" as if it were the only tablet out there. Why not say "tablet form factor" instead? It would have been a lot more neutral. Also, it seemed to confuse someone into thinking Apple might be putting Haswell into a new iPad.
Agreed. The Apple devotion has gone too far and the editorial balance has been lost. The podcasts in particular are basically an advertising campaign for Apple and a thinly disguised excuse for Anand & Friends to praise everything Apple. So I do not listen to them.
The articles though, like this one about Haswell, are still worth reading. You still get as many gratuitous Apple references as Anand can throw in, but there is also plenty of substance for everyone else.
It's not "devotion", it's simply an accurate description of the market. How many iPads are out there? 100 million. One tenth of a BILLION. One for every 70 people on the planet. Well over half of Fortune 500 companies use them. Hospitals use them. Pilots use them. Name one other tablet that comes close to that sort of market penetration. When Apple decides to make their own silicon for their devices, it's a big, big deal.
For the record, I don't have one. I just understand the significance of the 800 pound gorilla.
Let's see. I think we can agree that the Samsung Galaxy S III was the most important Android phone launch of the summer, so it should get comparable treatment if Anandtech were completely neutral. Let's compare the articles about the SGS III vs. the iPhone 5.
Doing a search on anandtech.com gives us 8 articles/news posts about the SGS III vs. 13 articles/news posts about the iPhone 5.
SGS 3:
Five news stories about product announcements
Performance Preview article
Preview article
Review article
iPhone 5:
Why iPhone 5 wasn't launched in 2011 article
Analyzing rumours about iPhone 5 article
New SoC in iPhone 5 article
iPhone 5 Live Blog from the product launch ceremony
Three news stories about new features and product announcements
iPhone 5 Hands On article
Lack of simultaneous voice and LTE/EVDO article
Analyzing Geekbench results article
SunSpider Performance Analysis article
Performance Preview article
iPhone 5 Display Thoroughly Analyzed article
+ The upcoming iPhone 5 Review article
+ articles such as "iOS6 Maps Thoroughly Investigated"
Look at the difference. It's quite clear which device gets more coverage. And it's the same thing for older iPhones. Articles such as "Camping out for the new iPhone 3GS".
This is NOT equal treatment of all products. This is why my trust for Anandtech has started to slip. Yes, Anandtech still is the best place for reviews, but one really has to wonder if those reviews still are as neutral and objective as they used to be.
Apple's (iOS) current sales are only 20% of the overall smartphone market share, while Android is over 60%, so if either one of the two is largely irrelevant, it's apple.
Not everyone can afford the premium quality of an Apple product. They will have to settle for an inferior Android device instead until they can afford higher quality products.
"Not everyone can afford the premium quality of an Apple product. They will have to settle for an inferior Android devices instead until they can afford higher quality products."
Ha ha ha! Well done, if you're screwing around.
But seriously, if you actually believe that, seek psychiatric help. :-P
And how many of the 20% are on one phone? Let me know when you figure out how to cover every single Android-based device that hits the market in a given year.
Anandtech being a hardware site, it's more inclined to keenly follow hardware devices with new architectures and innovations. The iPhone brings in:
1. A new A6 chip design and a novel 3-core graphics core
2. A new 3-microphone parabolic sound receiving design (which will likely become the new standard)
3. A new SIM tray design (which will also likely become the new standard)
4. A new Sony BSI stacked sensor (the 13 MP version will likely be the rage next year)
5. The first time we have a 32 nm LTE chip, which will give all-day usage
6. A new thinner screen with incorporated touch panel and 100% RGB
I am not sure about samsung but can anyone enlighten me about S3's technical achievements?
Of course a company that releases one device per product category per year as well as one with the greatest mindshare is going to have more articles.
But what happens when you add up all Samsung phones against all Apple phones in a given year?
What happens when you don't count the small blogs that only detail a small aspect of a secretive product but count the total words to get a better feel for the effort spent per company's market segment?
I bet you'll find that AT spends a lot more time covering Samsung's phones than Apple's.
Also look at any other Apple product review. They are all ridiculously in-depth with analysis about almost every single component in the product. Macbook Pro with Retina Display got 18 pages, the 3rd gen iPad got 21 pages. Don't get me wrong, I like a proper review with everything analyzed, but it's only the Apple products that get these huge reviews. But compared to those massive Apple reviews, it's like all other products are just glanced over in a hurry. The new Razer Blade got 9 pages. Asus Transformer Pad Infinity got 8 pages.
Of course the iPhone articles are going to be longer and more numerous than GS3 articles.
iPhone releases come with new iOS releases and have their own eco-system.
Android phone releases use a common OS across them and therefore much of what's in one article doesn't need repeating in another.
Anand liking Apple is not our problem. I can see why people like them (not so much Anand), and that's fine; personally I dislike them ("hate" was originally typed, but was edited out as being incorrect), but I still respect them and respect people who purchase their products (and pay for their litigation).
An entire page of comments arguing that Anand isn't allowed to like or talk about Apple products because you guys don't like them is ridiculous; they're a PC company and should exist on a PC website.
Sure, but I'm talking about dedicating entire, long articles to such things as the iPhone display or why it doesn't have a certain feature and so on. The SGS III has a very interesting display, too. Still it didn't get nearly as much attention. Of course Anand is allowed to talk about Apple products. What I want, though, is Anand(tech) to be as thorough in reviewing other products, too, or then stop making those huge articles only about Apple products. Because that is biased. In the Macbook Pro Retina article Anand talked about the cooling system and the fan blades for one page. When I read any other laptop review on Anandtech, cooling is briefly described in a sentence or two. Dedicating so much attention to just one company's products makes it look like Anandtech is biased. And that is not good.
Android products would get more coverage if they bothered to do any engineering on them. Since they don't push the technology the way Apple does, they don't need a more in-depth review.
You're kidding, right? Hardware-wise Apple has always been behind the curve compared to the competition in every facet of its product line-ups, or has very quickly been beaten.
Umm, I would disagree there. Apple has always been ahead of the curve in GPUs and this is the FIRST TIME SINCE BEFORE THE A-SERIES that Apple has had a GPU without an overwhelming lead on the competition for more than half a year.*
While GPU selection isn't always huge, it's one of the biggest points of differentiation in mobile chips, along with power use.
*excluding the A4 if you count from when it was first in a phone as opposed to in a tablet.
The last time I played a game on my phone was about 8 months ago, and I'm 15! To say that Apple pushes their hardware is as naive as it gets. The Galaxy S III was the best purchase I've made; even my mum doesn't like my iPhone 4.
The bit that aggravates me the most is that even with this lavishing of review pages, the actual comparison of Apple products to competitors tends to be lacking (particularly in the MacBook article). This is understandable under some circumstances (iPhone battery life: new test, small selection of data points) but not for others.
I'm not really seeing any of that. AT's Android and Windows Phone reviews are just as in-depth and complementary where due as their Apple ones. AFAIK both Anand's and Brian's daily-driver phones aren't iPhones, even. They care about the tech, not who it comes from. It just happens that Apple is often the original source of new and interesting things in that space. At this exact moment they're the only people shipping something new and interesting. When the Nokia 920 launches, I'm confident Anand and Brian will be ready with a 15+ page review and discussion of anything novel on the podcast, and when Winter CES brings us Tegra 4 and other Android news, I expect to see eye-glazing levels of detail here at AT.
(As an aside, I smiled at how closely DPReview's discussion of the alleged "purple haze" problem tracked Brian's rant on the podcast; clearly both writers know what they're talking about, which can be a rare quality in tech journalism.)
I think Anand's daily driver is an iPhone, but he frequently carries the latest Android/WP device on the side. Brian and myself end up daily driving like a half dozen phones a month, depending on what shows up at our doorstep.
"iPad 3 form factor" was used because all of the other tablets have 25Wh batteries and draw about 5W max. The A5X iPad and it's giant 42.5Wh battery on the other hand can put out over 10W of heat which is the power envelope where Intel might target a Haswell SOC.
I totally agree with you on the Apple part. That's the biggest pullback on reading Anand writings. Too much Apple praising.
I used to be an Apple fan, but recently they've become the biggest jerks in the technology industry. The human/ethical part of me hates them so much that I won't buy anything that has an Apple logo on it.
I gave away my iPad 2, switched to Samsung Galaxy S phones, and use my HP Windows 7 laptop over the 2011 MBA.
Probably because most people know how large an iPad is. If he said "tablet" form factor, that's ambiguous... and if he said "Motorola XOOM" form factor, not as many people are familiar with the size.
100% agree on the well-engineered part, especially on antennagate, when Steve God was saying "you're holding it wrong", plus the recent ingeniously designed sapphire glass lens camera where Tim Schmok was saying "stay away from bright light sources". Boy, Apple products must be engineered straight from heaven; they are just too perfect for a mere earthling to use.
@stop-a. Since you are a 100% Apple hater, let me ask you this: what computer do you use? And what OS do you use on it? I hope it doesn't crash several times a day. I use a MacBook Pro 2012 and I don't see anything come close.
You really shouldn't use the 'crash several times a day' piece anymore. I'm annoyed every time I see this. My Windows 7 machine has an uptime of 20 days and counting. Most of the time, it waits for me to connect to it via SplashTop or FTP, or it's recording TV shows, but when I play games, I stress it bigtime. Seriously, stop with the Windows constantly crashes crap. It's just plain false now.
P.S. - 20 days ago, I brought it to another house, thus the interruption in uptime.
I have a Dual socket 2011 motherboard with dual Core i7 3930K's both chips clocked to 4.6ghz, 32gb of ram, Triple Radeon 7970 3gb cards powering 3x 27" Dell U2711 monitors in Eyefinity.
Kay go. Let's see if your Mac can keep up, or a Mac workstation at the same price. (Hint: not going to happen.)
Besides, Macs look ugly. I prefer the whole shebang of a side window with a nice water cooling loop and having the whole thing light up, not some dull silver box.
Plus, my system is completely stable. Never had a crash yet with Windows 7 and... I have access to the last couple of decades worth of software and games, not to mention emulation of other platforms.
I can also pretty much find software and hardware easily and it will "just work" I never have to ask the question of: Will it work on a Mac?
Two Core i7s working on a dual socket 2011 motherboard? You need QPI links for that, which only certain Xeons have. Sounds like your system will just NOT work!
Let's not forget the obscenely high failure rates of MacBook Pros, because they are huge, metallic, and yet refuse to let vents ruin the smooth awesomeness of their aesthetic.
Whoops, for many it won't last more than two years, if that. Hell, if you're lucky, your battery will give out before your laptop cooks. Regardless, look up what Apple suggests and you'll get:
I agree. Admittedly I am not an Apple fan, and I view their fans as people who have undergone a degree of brainwashing, compounded for some by the need to keep up with the Joneses. A certain degree of mind control must be necessary to stick with a company that has had some questionable business practices as far as customer relations, dealing with product issues, and denying said issues; not to mention the whole hypocritical stance by Apple in regards to copyright infringement has also left a bad taste in my mouth.
Disagree; there's not that much new here beyond the IDF reports published almost a month ago. What is interesting is the claimed 40 EU GT3; other sources say lower amounts.
One can't be a biased !@# !@#@ and a good journalist at the same time. One needs to be blind not to see how the glass is always half empty for AMD, and half full for nVidia/Intel. F**!@#'s were shameless enough to test a 45W APU with a 1000W PSU, and such crap is all over the place.
As I was reading this article, about part way into the low platform power sections I suddenly had this thought: "Oh man, AMD is gonna die...!"
I don't know if that's true for the entire microprocessor side of AMD, since they look like they're already starting to transition out of the desktop space, but I don't know if they're going to stand much of a chance if they're planning on entering the same TDP range as Haswell.
Do you think there's a chance AMD will start focussing on designing ARM ISA cores? Or will expanding on their x86 Bobcat-type cores be enough for them?
I also worry about AMD. AMD has been 1-2 steps behind Intel for a while now, and now it seems Intel is at least 1 or 2 steps behind ARM and the future. Does that mean AMD is just too far behind to stay relevant now? If nothing else, I suppose AMD can fall back on graphics cards with its ATI acquisition.
If Haswell keeps x86 relevant in the tablet space and thus Windows 8 has the upper edge over Windows RT and Windows tablets can grab +-50% market share from the iPad, then it can be good for AMD, provided they survive that long.
If AMD can create a team to focus on increasing IPC with a goal to one up Intel and have the ATI graphics people keep doing what they do with a time goal of say 2 years, (Note: Portables/Notebooks/Desktops should all be x64 by then), then I think that AMD will be able to return to their Athlon 64 glory days or better.
However, a lot of the R&D Intel spends is on lithography-type technologies; AMD doesn't have to spend billions on such things anymore.
Besides, a simple way for AMD to beat Intel when Intel is a node ahead is to throw more transistors at the problem which they have succeeded very well at doing in the past. Mind you, that comes at the cost of power and die size, however with stuff like clock mesh it can negate some of that.
Being four steps behind ARM isn't necessarily a bad thing unless you're trying to leap frog them. AMD appears to be content with letting Intel spearhead the effort to get into the ultramobile market. With Intel two steps behind of ARM and they couldn't leap frog over ARM, there is little chance that AMD would be able to do the same. It isn't just knowing what battles to fight but also when to fight them.
It was only when I was reading Joanna Rutkowska's notes on the current UI limitations within Qubes that I finally understood (I believe!) the message which AMD has been pushing for quite a few years now: GPU compute will truly be an integral part of their future APUs in one or two generations, becoming almost an augmented instruction set instead of just a SoC.
Currently all Qubes "user" applications, that is everything except the Dom0, can't use the GPU to render their graphics: it's basically software rendering into an off-screen composition buffer and then GPU-assisted composition of these software buffers onto the visible screen (this time with all the wobble and transition effects we've all come to expect and love ;-).
That's because although the GPU is on the same die even on the newest Trinity-class APUs, it's still logically very separate, sharing only some structures but bypassing, I believe, the ordinary page tables (not the IOMMU ones) and the cache snooping logic. So even if the GPU and CPU sit on the same die and use the same physical DRAM bus, doing GPU compute implies using a dedicated part of that RAM in a way which doesn't mesh seamlessly with CPU compute.
But the roadmap seems to imply, that this limitation will go away, which would allow e.g. Qubes to use GPU assisted rendering anywhere in user space memory and thus also into a per DomU virtual framebuffer composed of quite ordinary paged virtual memory, which could then be assembled by the Dom0 for the visible screen or for video encoding and streaming to a remote display device e.g. for cloud gaming.
This easy feeding of GPU "results" into another software layer is currently either impossible or requires major fiddling with device drivers so it's limited to the GPU vendors and bilateral deals such as nVidia and Splashtop. Once the GPU becomes more of an augmented instruction set, allowing OpenCL or even hardware primitives on ordinary user space paged virtual memory, this becomes as natural as running virtual machines with hardware virtualization.
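To make the "GPU compute on ordinary user-space memory" point concrete, here is a minimal host-side sketch (my own illustration, not anything from AMD's roadmap or from Qubes): current OpenCL already lets you ask the runtime to work on a normal, page-aligned user-space allocation via CL_MEM_USE_HOST_PTR, but whether the driver genuinely maps those pages for the GPU or quietly copies them behind your back is exactly the plumbing that tighter CPU/GPU integration is supposed to eliminate. Error handling and cleanup are omitted, and the `scale` kernel is invented for the example:

```c
/* Hypothetical illustration: ask OpenCL to operate on ordinary user-space pages.
 * On an APU with unified memory the driver may map them directly for the GPU;
 * on other hardware it will silently stage a copy instead. */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

static const char *src =
    "__kernel void scale(__global float *buf, float f) {"
    "    size_t i = get_global_id(0);"
    "    buf[i] *= f;"
    "}";

int main(void)
{
    enum { N = 1 << 20 };
    /* Ordinary user-space memory, page-aligned so the driver can pin/map it. */
    float *host = aligned_alloc(4096, N * sizeof(float));
    for (size_t i = 0; i < N; i++) host[i] = (float)i;

    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* The buffer wraps the caller's pages instead of a GPU-private allocation. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
                                N * sizeof(float), host, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    float factor = 2.0f;
    size_t global = N;
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(k, 1, sizeof(float), &factor);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);

    /* Map the buffer back so the CPU sees the GPU's result in the same pages. */
    float *out = clEnqueueMapBuffer(q, buf, CL_TRUE, CL_MAP_READ, 0,
                                    N * sizeof(float), 0, NULL, NULL, NULL);
    printf("host[1] = %f\n", out[1]); /* 2.000000 if the kernel ran */
    clEnqueueUnmapMemObject(q, buf, out, 0, NULL, NULL);
    clFinish(q);
    return 0;
}
```

Once the GPU shares the CPU's page tables and cache coherency, a buffer like this stops being a special case the driver has to shepherd and becomes just memory, which is what would let a Dom0 compositor (or any other software layer) consume GPU output directly.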
And at that point even the new 256-bit FMA may look pretty lame compared to what hundreds of APU EUs could do. That to me explains rather well why AMD isn't spending more transistors on a vastly improved CPU-only x86 ISA: it truly believes it's a dead end for both personal and scientific workloads.
It's a very daring bet and I very much admire them for having the vision and the balls to tie the company's survival to it. Over the last 40 years Intel seems to have failed with most of its visions (iAPX 432, i860, Itanium), but excelled at evolving x86. AMD, however, seems better on vision and noticeably second-rate on execution.
APUs are potentially quite dangerous both to nVidia and to Intel, because neither can easily duplicate them: the AMD/Intel cross-licensing deal IMHO won't cover the GPU portion. Unless nVidia and Intel join forces, which would only happen if either of the two is in truly dire straits.
But quite a few things need to fall into place over the next couple of years, and AMD needs to survive them for that potential to develop. And it looks like all the other players aren't standing still.
Events like Apple potentially using its Samsung-augmented billions in cash to turn TSMC into a private provider of 1x nm ARM SoCs are sending shock waves through the market, which may force "strange" alliances.
These days, when even trivial things like "swipe to unlock" can be patented and used to bloodlet competitors, I'm surprised to see IBM and Intel use things like transactional memory, which first saw silicon with Sun's Rock, I believe, or Intel turning to eDRAM for caches and frame buffers, which IBM implemented first on the p-Series.
That leads me to an open question on commercial workloads, which is almost the only domain where I have difficulties seeing the immediate benefit of APUs, at least after Oracle's grab on Java and their expressed intent to make commercial workloads a SPARC exclusive (please see Larry's opening remarks at OpenWorld 2012): how can AMD make APUs the better Java and database engines? How can they make search, big data, map-reduce or JavaScript run better on APUs?
I can only guess that having managed CPU+GPU AMD would be in a better position to add xPU for all of the above.
A great, detailed description of Haswell's architecture. I do have some questions though.
You mentioned that Intel will be including up to 1 redundant EU in the GPU array. Does that mean only GT3 will have the 1 redundant EU (41 total, 40 usable) with GT2 having no redundancy? Or is it 1 redundant EU per sub-slice, so GT2 will have 1 and GT3 will have 2?
Will the embedded DRAM be implemented PoP like in a SoC? When you say we'll see a version of Haswell with embedded DRAM, will all GT3 parts have embedded DRAM, or only some GT3 parts (kind of a GT4)?
Given the long timescales of CPU design, there would be overlap between the Haifa team working on Sandy Bridge/Ivy Bridge (particularly Ivy Bridge) and the Hillsboro team working on Haswell. I was wondering if you knew how much opportunity there is for learning between consecutive designs, in terms of the magnitude of changes possible and the timescales before things are pretty much fixed. I'm in no position to judge, but I was also wondering, based on your knowledge of the architectures and/or interactions with members of the design teams, whether you sense any distinct difference in design philosophy between the Haifa and Hillsboro teams. After all, the Haifa team's background was in power-efficient, mobile-oriented designs, whereas Hillsboro's was high-performance, desktop/server oriented. You mentioned in the article that Haswell goes back to Nehalem's 3 clock domains due to lessons learned from Sandy Bridge/Ivy Bridge. While I don't doubt that's the primary reason, I wonder if design philosophy played a role too, since Nehalem and Haswell are both Hillsboro designs and maybe they just like 3 clock domains.
Unfortunately that's all the info I have on redundancy in the GPU array. I think we'll have to wait until we're closer to launch to know more. The same goes for the nature of the on-package memory.
I wondered the same thing about the correlation between design teams and decisions in Nehalem/Haswell. I refrained from speculating on it in the article because I didn't necessarily see any reason to do so, but I definitely noticed the same correlation. It could just be a coincidence though. Nothing else beyond the L3 cache frequency really stood out to me as being an obvious common thread between Nehalem and Haswell.
Speaking of the EUs, is the GT3 part twice as fast as the HD4000 with or without the eDRAM cache? The article seems to imply with, but then what is the performance without it if they've doubled the EUs? Doesn't it seem more likely they doubled performance without the cache, and the cache doubles it beyond that?
Anand, thanks for the insights. We all enjoyed it very much and look forward to getting the real thing into your labs.
To clarify some questions: As for the design team philosophy, the Hillsboro design team continually tries to outdo the Haifa design team and vice versa. Both teams have access to the other teams' design collateral, as we co-own the tick-tock model.
Next, the reasons for the "3" clock domains are too complicated (and confidential) to go into. Since designing for "2" clock domains is much simpler, the reason is not that we enjoy pain and misery. Suffice to say, that you are missing a very big piece of the puzzle and accurate conclusions as to why this was done cannot be drawn from the information you have. And the number of clock domains is in quotes because those are not accurate anyhow.
I'm curious as to whether Intel has enough interest to drive the Atom design low enough to hit ARM power levels (like Medfield) and integrate an Atom core into a Core CPU design. nVidia introduced a heterogeneous CPU in their Tegra 3 SoC (two different ARM core types in the CPU block). From all the stuff I've seen about Intel over the past half decade, I'm pretty sure they have the resources to pull that off. They have top-notch designers and engineers with the basic tech and designs needed to start R&D on that, I think.
On the other hand, if they really are trying to force a Core design in Atom territory... Well, hell ya ^_~ Still, I can't really see Core hitting the sub-1W power levels they've been able to do with Atom (Medfield). I figure using an Atom core for basic S0ix functions would be a little more power efficient than using a Core design, but I'm no silicon engineer. Intel would know about that far better than me.
It's been known for a while that Haswell was only going to have a moderate improvement in the iGPU and the next big overhaul would be coming with Broadwell.
This is impressive, it might convince me it's time for a new laptop. On the other hand I also need to build a new desktop workstation and Haswell so far hasn't impressed me in that space.
It feels that way to me. Mobile performance seems to be their big concern now, that and improving the GPU. Two things I generally can't be bothered to care about when I'm looking to build a new workstation. I suspect I'll build an Ivy Bridge system because I could use it now and see nothing worth getting excited about.
I fully share your sentiment. To be very crude, I don't mind at all paying for power improvements, because they will pay for themselves in the long term (by consuming less power AND needing less cooling). But I DO mind very much paying for 40 EUs of GPU on my desktop build which I will not use even for a second. Me, you and many others do not care about on-die graphics, and Intel should realize that.
I don't know why Intel can't offer us both GPU and GPU-less options, the way they did with motherboards back in the day: P965 had no graphics, G965 did. Pretty sure it's not a technological issue.
If it makes you feel any better; reports elsewhere are that GT3 will be mobile only, because desktops don't have the power/size constraints driving the need for premium IGPs.
Intel's non-IGP CPUs are the E-series parts; unfortunately they've failed to execute on the enthusiast side in terms of price/launch date, leaving them as mostly server parts.
There just aren't enough of us to justify Intel adding another die design for their mass market socket that doesn't have an IGP at all instead of just letting us turn it off and use the extra TDP headroom for more time at boost speeds.
Whilst I share a concern that Intel is no longer focusing on raw performance improvements in the purely desktop space, they are still delivering incremental updates to the architecture that will benefit all current software (even if only marginally). However, processor performance has been reaching more and more diminishing returns in recent years, namely that software is simply not able to take advantage of multiple cores and improved performance because of (primarily) locks and complexity in creating multi-threaded applications.
As such, Intel has been focusing on that area - to make it easier for software and software developers to take advantage of the performance that exists *now* rather than brute forcing the issue by simply delivering more raw performance (much of which will be wasted/remain idle due to current software constraints).
With this, Intel has been able to keep performance high whilst dropping power usage substantially. The fact that the iGPU is often not used in a desktop environment does not invalidate its utility: QuickSync is a prime example of where the GPU can accelerate certain types of processing, and if more software takes advantage of this we should see even more gains in future.
For the last 6 years or so, Intel has shown that it knows what demands will be placed on future computing hardware, and they seem convinced that this is the way to go. We might not be there yet, but technologies like C++AMP, OpenCL and such make me hopeful that this will change in a few years.
I solved this problem by buying an Ivy Bridge Xeon (specifically, an E3-1230v2). No GPU, lower power consumption than the equivalent i5/i7, has Hyper-Threading, performs really well, and is a lot cheaper than an i7.
If you don't care about the GPU, look to the Xeon line.
Woah! I did not even think of that. That is VERY compelling but i can't do without unlocked multiplier, so there is no perfect processor for me still :(
Or just go with a Socket 2011 Core i7 3930K like I have and do a little bit of undervolting; it has no IGP.
I think the reason the desktop space has seen decreasing/stagnant sales is simply that a lot of people see no need to upgrade.
A Core 2 Quad Q6600 @ 3.6ghz, with a decent chunk of Ram and a decent graphics card is actually fairly capable of running almost every game at maximum settings.
Heck I know people who are perfectly happy sitting with a Pentium 4 for basic web use.
I think a change needs to happen where software catches up with hardware to give people a reason to upgrade and drive sales which might reinvigorate Intel and AMD to innovate.
Windows 8 and the next generation consoles might actually help in that regard.
I'm running a Core 2 Extreme QX6850 at 3.4GHz, 1066MHz DDR2 RAM and a GTX295, and it still rocks all the newest games at or close to max settings.
I'll have had this system 4 years this November (all except the GTX295, which was upgraded from a 9800 GX2); even now I'm thinking that was a waste of cash.
I've gone to upgrade at least twice each year, but can't justify it.
The only place I'd see returns is in the power costs, but hey, what's a few extra cents... The system meets my needs, and forking out for a similar system today would cost around the €1800 mark.
Until the software can better utilize the components I'm holding out until Summer 2013, that'll be over 4 years I've gotten out of this system. Up until 2008 I slavishly upgraded every year or 2.
This (late) December, I will have had my i7 for 4 years, and I have not seen a single reason to upgrade. The GPU is 2.5 years old (GTX480, was a 280 before that).
An X58 motherboard has 6 memory slots, and mine now houses 24 GB of RAM for virtual machines, which can go to 48 GB for a reasonable price.
I just don't see the need to do anything more, and this will probably fail from old age before i would need a drastically faster machine.
I don't mind power savings, the few times my system is idle it could certainly benefit but overall it would mean reduced consumption even under load. My system just doesn't spend enough time in idle with my Q9450.
Ultimately it does seem as though the software demand for faster CPU hardware has slowed and between that and the lack of real competition, so has the development.
If it weren't for the fact that I need more RAM or wanted faster photo processing (and may start doing some video) I'd probably keep what I've got a bit longer. My Q9450 hasn't held me back from playing any games yet. The 20% OC I've been running doesn't hurt but ultimately a lot of things just aren't CPU limited anymore.
You may think this is lost in all the low power talk, but Haswell is doing something rather important on the peak performance side. The increase in the size of the execution engine is important: adding another integer ALU and another load/store port means that workloads which mix INT and FPU work (think loop counters which store an INT for loop iteration and then perform some FP calcs) will improve. By increasing the bandwidth available and being able to keep the two FPUs fed, you get greater throughput as long as the bandwidth and thread switching can hide any additional L3 latency.

Personally I'm thinking this may be a subtle move towards more threads per core in future architectures. Some of the non-x86 designs are using 8 threads/core with improvement gains, so I wonder if that would be possible here. Ideally we would like every port on the execution engine to do everything, with a single pipeline feeding it and excellent branch prediction to help with single-thread speed. Smaller nodes help with the silicon real estate, or someone will stumble on a better/smaller way to physically create these things.
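As a toy illustration of that INT/FP mix (my own example, not the commenter's): in a loop like the one below, the counter increment, compare/branch and address generation land on the integer/AGU ports while the multiply-add lands on the FP ports, so widening the integer and load/store side is what keeps both FMA pipes fed.

```c
#include <stddef.h>

/* y[i] = a*x[i] + y[i]: the loop bookkeeping (i++, compare, branch, address
 * math, loads/stores) occupies the integer and AGU ports while the
 * multiply-add occupies the FP/FMA ports. */
void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```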
I'm curious what IBM/Oracle's high SMT designs look like on the execution port side. As long as it's business as usual I doubt Intel will ever make all the ports do everything because it would just be hogging a huge amount of die area when the odds of each thread doing all of the same instruction type constantly are very low. Smaller bursts of one type can be spread out using OOOE.
Perhaps they also try to reach lower usable clock frequencies through performance upgrades and this way gain some additional voltage scaling, or what is left of it.
The high end desktop space was abandoned quite a while ago. The LGA-2011/Extreme platform remains as a way to somewhat address the market, but I think in reality many of those users simply shifted their sights downward with regards to TDPs. A good friend of mine actually opted for an S-series Ivy Bridge part when building his gaming mini-ITX PC because he wanted a cooler running system in addition to great performance.
To specifically answer your question though - the common thread since Conroe/Merom was this belief that designing for power efficiency actually means designing for performance. All architectures since Merom have really been mobile focused, with versions built for the desktop. I like to think that desktop performance has continued to progress at a reasonable rate despite that, pretty much for the reason I just outlined.
Well, LGA2011 is a bit of a halo product with no real substance. An Ivy Bridge 3770K will stand up to a quad-core LGA2011 part nicely, not to mention it supports PCIe gen 3, so even though it has fewer lanes, it doesn't have a bandwidth disadvantage. Moreover, LGA2011 is still stuck on the Sandy Bridge architecture, so it isn't quite on the bleeding edge either, and as far as I understand, Haswell will come out before IB-E does, so it's 2 full cycles behind.
For a single discrete GPU, Ivy Bridge would be able to match the bandwidth of Sandy Bridge-E: a single 16 lane PCI-E 3.0 connection. Things get interesting when you scale the number of GPU's. There is a small but clear advantage to Sandy Bridge-E in a four GPU configuration. Ivy Bridge having fewer lanes does make a difference in such high end scenarios.
For its target market (mobile, low end desktop), Ivy Bridge is 'good enough'.
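Rough lane math behind the multi-GPU point above (my own back-of-envelope numbers, assuming ~0.985 GB/s per PCIe 3.0 lane after 128b/130b encoding, and that the LGA2011 lanes are run at gen-3 rates, which many boards allowed):

$$
\begin{aligned}
\text{1 GPU:}\quad & \text{IVB } \times16 \approx 16 \times 0.985 \approx 15.8\ \text{GB/s} = \text{SNB-E } \times16\\
\text{4 GPUs:}\quad & \text{SNB-E (40 lanes): } \times16/\times8/\times8/\times8 \Rightarrow 15.8/7.9/7.9/7.9\ \text{GB/s per card}\\
& \text{IVB (16 lanes, via a PLX switch): } \approx \times4 \text{ each} \Rightarrow \approx 3.9\ \text{GB/s per card}
\end{aligned}
$$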
Given that desktop software's not really been pushing for better CPU performance, the direction intel has taken is not a bad one IMO either. It's now possible to build a mighty gaming rig in an mITX case (Bit Fenix Prodigy), think 3770K and GTX 690 gfx and watercooled.
A rig like that will likely last 3 years before settings have to be tweaked to keep 60+ fps.
What's really needed is for software to take advantage of GPUs more, (which would play into AMDs hands), but I fear many of the best coders have switched from windows to Android/iOS development, With windows 8 shipping shortly, that number will increase further.
I for one always need more FLOPS, MCAD work and simulation work depends on two things memory bandwidth+size and flops, surprisingly AMD still offers a better vfm deal in this space thanks to avx instructions not being widely adopted into most FEA/CFD code yet and the additional ram slots you get with cheaper boards.
Server components are always overpriced, as we don't need a system to last very long. My 3930K setup is about 1.5 times faster than the X6 setup at 3 times the cost... :(
This is a perfect demonstration of the power of competition.
With AMD struggling badly, Intel was content in pushing Atom. They didn't want to innovate in that sector, they sold 10 year old technology with horribly outdated chipsets. Yes, they were relatively cheap, but I was appalled.
Step in ARM, suddenly becoming a viable competitor. Now Intel moves its fat ass and tries to actually build something worthwhile.
Sadly, free markets are an illusion. Intel should pay dearly for the Atom fiasco, but they won't. Just as they didn't pay for the Pentium 4 debacle. They will come 5 years late to the party, but with all their might they will crush ARM. ARM will fall behind; they can't keep up with that vicious tick-tock cycle. Who can?
In 8 years, ARM will have been bought by some company, perhaps Apple. ARM will then no longer be a competitor, it will be just a different architecture, like X86. I don't see Apple having any long-term interest in designing their own hardware, it's way too unsexy. They will just cross-licence ARM with Intel and in 10 years time, Intel will rule supremely again.
You forget that Intel vs. ARM is something bigger than AMD vs. Intel. Behind ARM stand Qualcomm, Samsung, Apple, ... All new software is written for ARM, not for Intel (x86) any longer. Microsoft releases a rewritten ARM Windows RT with a rewritten Office for ARM. Android runs on ARM and everyone supports the ARM version, while only Intel has to keep it compatible with x86. Haswell will get released, when exactly? In a year; ARM A15 in maybe two months. Haswell has nice power savings, but it's still an Ultrabook design. The current Atom SoCs are much worse than current A9/Krait SoCs. Intel heavily optimized the software to make it look not that bad (excellent SunSpider results), but they are. If Windows 8 is a success, Intel can count itself lucky. If it's not, which many expect, Intel has a real problem.
Intel is a single company building and developing their CPUs/SoCs. ARM SoCs get built and developed by a multitude of companies.
If Apple can design their own ARM-based SoC which has the same performance as a Haswell CPU (which is easy in the GPU area, as the iPad most probably already has a faster GPU than the Intel chips, and with the A15 and Apple's A6 it's possible to get as fast on the CPU, too), they will be able to move Mac OS to ARM. This allows them to build a very, very power efficient, lightweight, silent MacBook. They can port apps from iOS to Mac OS and vice versa. Because they designed their SoC in-house, they don't have to fear competition in the near term.
Apple always wants a monopoly, so it doesn't make sense for them to cross-license anything.
Unless your app is doing some serious math you can get by with just using a cross-platform key chain. Frankly, the hard part is targeting the different APIs that currently predominate on each arch. However, assuming those don't change, and the form factor doesn't either, your new app should just be a compile away.
Current Atom SoCs are not "much worse" than A9/Krait. Most CPU benchmarks running native code will favor the Intel SoC. It's the addition of Android/Dalvik that leans the favor back to ARM. Android has been on ARM for a lot longer and is more optimized for ARM code; Android needs more tweaking yet to run optimally on x86.
Nearly all of the software on Android is junk. Apple blocks everything at a whim and gives no control. I don't know about Windows RT, but I suspect it will suffer the same manner of crap programs Android does if it's not already.
Even if people are more focused on developing for ARM, the ARM OSes are still way behind in program availability(especially quality). And it's downright sad seeing people charging money for simple, poorly coded programs that can't even compare to existing open source x86 software.
I agree competition is good/great. However, how you categorize Atom is just not true! Atom filled a very real niche. Cheap mobile computing. Not powerful, but x86 and fast enough to do basic tasks. I loved my Atom netbook and used it until it bit the dust last week. Would I have liked more power? Sure, but not at the expense of (at the time) battery life. Besides, once I maxed it out by putting in a SSD and 2 GB RAM, my netbook often outpaced many peoples' newer more powerful Core based laptops for basic tasks like word processing and web browsing.
Just because power users were unhappy does not mean Atom was a 'fiasco'. Those old chipsets allowed Atom netbooks to regularly sell, fully functional, for under $200, a price point that Tablets of similar capability are only just starting to hit almost 4 years later...
Don't bash Atom just because you don't fit into its niche, and don't blame Intel for HP trying to oversell Atom to the wrong customers...
My Samsung Series 9 X3C (Ivy Bridge) draws, viewing this page with WiFi and BT on, between 4.9W and 9.9W from lowest to highest screen brightness, with a typical draw of 7.2W at good brightness (using Samsung's own measuring tool).
So the screen is by far the most important component in a modern machine. In the complete ecosystem I wonder if it matters how efficient Haswell is. The benefit of a 10W TDP at, say, the same performance is nice, but does it really matter for the market? And idle power is already plenty low.
I doubt Haswell will have a significant impact, as nice as it is. It is just too late and way too expensive for the mass market. Those days are over.
By the time it hits the market, dirt-cheap TSMC 28nm A15s and Bobcat successors will be available for next to nothing, and will give 99% of consumers the same benefits.
I hope I have this right: the L3 on SB/IB isn't used by the GPU. The L3 still serves as a system cache via the memory controller; if the GPU needs to access memory, it sends a request to the memory controller. The L3 is not directly accessible to the GPU as a texture cache etc. On IB, they added a 512K cache which is split in half: 256K of it is used for texture backfeeding and the other 256K half is used for other things.
The article implies that the L3 cache on IB is used as a texture buffer like on ordinary graphics cards. Only on Haswell will the L3 cache be accessible and usable as some kind of GPU-specific buffer.
The confusing thing is that consumer Ivy Bridge parts have an L3 cache just for the GPU, which is separate from the L3 cache that the CPU uses. The Ivy Bridge GPU can use the CPU's L3 cache as the GPU's L4 cache, to a degree.
To confuse things further, the CPU side really has four levels of cache too. There is the small ~1.5K-entry decoded micro-op cache for instructions, which sits alongside the 32 KB L1 instruction cache.
It's not clear how much of the VR circuitry gets integrated into Haswell, or necessarily which parts will have it and which ones won't. Ultra mobile is a shoo-in, but I've heard of even desktop parts getting it. We'll have to wait and see.
Rats. Reading the article I was hoping that Intel had decided to only bake the VRMs into their ultra-mobile parts. Better VRMs are an important factor in high end OCing; with desktop boards not cramped for space I really hope Intel keeps them off the package.
However, I wonder whether VRMs on high end mobos will still be an option, with the on-package VRMs simply extending their capabilities?
But given Intel's recent distaste for overclocking, it wouldn't surprise me if we soon see CPUs completely locked from overclocking, or overclocking reserved for E-series, high-profit chips.
Using LVDS reclocking you can reduce idle screen-induced wakeups to 30 (ditto for the memory controller if the CPU supports self-refresh for the SRAM). eDP may allow even less.
I derived immense pleasure reading the article. Thank you, Anand. Big ups for the comprehensive read. My thoughts: I think Intel really dropped the ball by not having unlinked clocks for each core, like Qualcomm has for its S4 Pro processors. There are so many times when, for instance, I have a page open with some animated GIFs. They are strictly single-threaded processes and they won't let the processor go to an idle state. And this is a very VERY common occurrence that can, IMO, only be solved by adopting unlinked clocks for each core. 3 cores can stay in a sleep state (almost perpetually) and the processor runs on a single core at lowered frequency. THAT would be power efficient.
Uhh... isn't turning off unused cores and overclocking the 4th core within TDP to perform single threaded tasks exactly what Turbo Boost introduced in Sandy Bridge is?
Reducing power is great and also inevitable, but Intel's move to compete against everything and everybody is alarming. With everyone trying to follow/please Apple, that means nothing good for the consumer, throw-away luxury electronics for exceptionally well groomed masses. Also, isn't it too early to be hyping this stuff?
The ARM problem is not about the product but about price; long term the CPU/SoC ASP will drop hard, there is competition now. Servers will keep them on life support for a while, but without fundamental changes to their business model they can't make it. Intel should remember how they won the market.
Regardless of whether they step foot into that end of the spectrum or not (and by Anand's analysis that's more likely with Broadwell and on?), they still need to compete on price.
It's one thing to make a chip; it's quite another to make it competitive with respect to pricing. What works against a distant AMD won't work against ARM.
I agree. This seems to be something that most people overlook when addressing the Wintel monopoly. The cost of Wintel products is high within the PC/laptop space. The price of ARM devices/apps is low within the smartphone/tablet space. How does Wintel square this circle without damaging their business model?
I really don't know how you can think Apple would ever start using Intel chips in their iPads when Apple has already proven they want to make their own chips with A6.
Also, according to Charlie, Haswell will be like 40% more expensive than IVB. Atom tablets already seem to start at like $800. So I wish Intel good luck with that. Ultrabooks and Win8 hybrids won't drop down in price any time soon.
I don't know how you could fail so badly at reading comprehension; Anand only talked about the same flying spaghetti monster-damn form factor. Nothing else. There also must be an ecosystem, but if you can run the same app on a tablet as well as a desktop on x86, with more performance than ARM, why wouldn't you see vendors use it? It is a full system, even capable of building itself. It's not about killing ARM. Intel still uses it; they need fairly high-performance RISC chips for stuff like baseband. They had a large market in smartphones before 2006 and they made the choice to sell it because they had Atom in their lineup. They didn't forget about it.
It's Microsoft tablets that cost 500-900 dollars even on Atom, but they only need to compete with Windows RT, which is totally crippled as far as corporate customers go and is not the same system as 8 Pro; it doesn't run the same software. An Android tablet could use a Z2460 (and the coming Z2580, and after that Valley View SoCs) and be built as a 240 dollar tablet. There is no price difference to be had as far as hardware is concerned. Windows 8 tablets are a whole other form factor and device to begin with. Most will have a keyboard and multitouch trackpad.
He only talks about the same form factor, size and battery life here. In the Microsoft ecosystem there is really no reason to go with Windows RT-powered ARM devices, which don't have better performance and run no third-party desktop (Win32/full Windows SDK) software. They also lack the same features in other areas, which makes them devices instead of general computing platforms. Remember, they offer both here. Hell, the built-in email is even worse than the one built into Android since version 3.0 or so; it's a lot worse than third-party mail clients on Android, and worse than the mail clients in BlackBerry 10, Symbian, iOS and so on. If you're replacing a desktop you're not going with ARM here, not on a Windows device at least. Anand only talks about a new breed of DTR tablets and ultra-portables that will fit in the same form factor and battery life scenarios as ARM tablets. Apple certainly doesn't need to participate here.
Intel certainly has sales to be made if they move Haswell down toward low-power Atom territory when it comes out later next year. Such a machine could be the only computing device you have (smartphone + hybrid tablet-PC), replacing desktops, ARM/Atom tablets, media PCs for your TV (just stream with Miracast), et cetera. ARM devices would just be cheaper, less capable devices there. But these are still different targets: Haswell still targets servers (the enterprise market), desktops and notebooks with larger form factors and power budgets, as well as more portable stuff; Atom is still for the handheld stuff you use with one hand.

ARM has moved quite fast, but they have no reason to target high-performance applications or build 100W SoCs that are fast without parallel computing. Applications like high-performance routers, for example, still use licensed and custom MIPS and PowerPC chips. There are plenty of markets where a full-feature ARM Cortex or x86 won't work either. ARM is just moving into the multimedia field, replacing custom architectures in TVs, displacing MIPS, PPC etc.

If Apple builds a very large custom CPU architecture compatible with the ARM ISA for workstations, notebooks etc., they will just be in the same position they were in with PowerPC and have to compete with the high-performance chips that most can't compete with, even with much larger resources than Apple has. Apple and Samsung have no reason to do so outside handheld devices, low-power servers, consumer-oriented routers and streaming media boxes, which leaves plenty of room for Intel and all the rest. Plus WiFi and wireless baseband is a huge market in and of itself, and there it doesn't matter what the application processor architecture is. Stuff like ARM has competed because you could replace previous products with it easily, thus taking some of the SoC market away from others, but that coincides with the choice to do so.
It's the other way around: not talking about Apple using Intel in iPads, but rather Apple ditching Intel in the MacBook Air.
I do agree with Charlie in that there's a lot of pressure within Apple to move more designs away from Intel and to something home grown. I suspect what we'll see is the introduction of new ARM based form factors that might slowly shift revenue away from the traditional Macs rather than something as simple as dropping an Ax SoC in a MacBook Air.
Apple is in the unique position that they could go either way on platform. They are capable of moving iOS to x86 or OS X to ARM seemingly on a whim. Their decision would be dictated not by current chips or those arriving in the short term (Haswell and the Cortex A15) but rather by long-term roadmaps. Apple would be willing to ditch their own CPU design if someone else's brought a clear power, performance and process advantage over what they could do themselves. The reason Apple built an ARM chip themselves is that they couldn't get the power and performance they wanted out of SoCs from other companies.
The message Intel wants to send to Apple is that Haswell (and then Broadwell) can compete in the ultra mobile market. Intel also knows the risk to them if Apple sticks with ARM: Apple is the dominant player in the tablet market, one of the major players in the cell phone market, and pretty much the only success in the ultrabook segment. Apple's success is eating away at the PC market, which is Intel's bread and butter for x86 chip sales. So for the moment Intel is actively promoting Apple's competitors in the ultrabook segment and assisting with 10W Ivy Bridge and 10W Haswell tablet designs.
If Intel can't get anyone to beat Apple, they might as well join them over the long run. This would also explain Intel toying with the idea of becoming a foundry. If Intel doesn't get their x86 chips into the iPad/iPhone, they might as well manufacture the ARM chips that do. Apple is also one of the few companies who would be willing to pay a premium for Intel foundry access (and the extra ARM not x86 premium).
So there are four scenarios that could play out in the long term: the status quo of x86 for OS X + ARM for iOS, x86 for both OS X + iOS, ARM for both OS X + iOS and ARM built by Intel for OS X + iOS.
They would hardly want to be in the situation where they have to compete with Intel and Intel's performance again. Also, their PC/Mac lineup is just so much smaller than their mobile business; why would they create teams of thousands of engineers (which they don't have) to create workstation processors for their mobile workstations and Mac Pros? They couldn't really do that with the PowerPC design despite having influence on the chip architecture; they lost out in that race, would just grow more dependent on other external suppliers, and those Macs would lose the ability to run Boot Camp'd or virtualized Windows. It's not the same x86 as it was in 2006 either.
A switch would turn Macs into toys rather than creative and engineering tools. It would create a disadvantage given all the tools developed for x86, and if they drop the high end they might as well turn themselves into a mobile computing company and port their development tools to Windows. It's not like they will replace all the client and server systems in the world, or even aspire to.
I don't have anything against ARM creeping into desktops. But they really have no reason to segment their systems into ARM or x86. It's much easier to keep the iOS vs OS X divide.
Haswell will give you ARM or Atom (Z2760) battery life for just a few hundred dollars more or so. If they can support the software better, those machines will be loaded with software worth thousands of dollars per machine/user anyway, whereas the weaker machines simply can't run most of that. Casual users can still go with Atom if they want something weaker/cheaper, or another ecosystem altogether.
The market is less about performance now as even taking a few steps backward a user has a 'good enough' performance. It is about gaining mobility which is driven by reduction in power consumption.
Would Apple want to compete with Intel's Xeon lineup? No, and Apple isn't even trying to stay on the cutting edge there (their Mac Pros are essentially a 3-year-old design with moderate processor speed bumps in 2010 and 2012). If Apple were serious about performance here, they'd have a dual LGA 2011 Xeon as their flagship system. The creative and engineering types have been eager for such a system, and Apple has effectively told them to look elsewhere for such a workstation.
With regard to virtualization, yes, it would be a step backward not to be able to run x86-based VMs, but ARM has defined its own VM extensions. So while OS X would lose the ability to host x86-based Windows VMs, their ARM hardware could natively run OS X with an iOS guest, an Android guest or a Windows RT guest. There is also brute-force emulation to get the job done if need be.
Moving to pure ARM for both iOS and OS X is a valid path for Apple, though it is not their only long-term option.
You will not be able to license Windows RT at all as an end user. And Apple has no interest whatsoever in supporting GNU/Linux-based ARM VMs.
I'm sure they will update the Mac Pro; the delay is largely down to Intel themselves. That's not their only workstation though, and yes, performance is important in the mobile (notebook) space, and performance per watt is really important too. If they want mobile workstations and engineering-type machines they won't go with ARM, as that does mean they would have to compete with Intel. They could buy a firm with an x86 license and outdo Intel if they were really capable of that. ISA doesn't really matter here except when it comes to tools.
"Within 8 years many expect all mainstream computing to move to smartphones, or whatever other ultra portable form factor computing device we're carrying around at that point."
I don't know if I am in a minority or what, but I really don't see myself giving up my desktop anytime soon. I love my mechanical keyboard, my large screen and my computing power. So I have to wonder if I'm just an edge case or if analysts are reading too much into the rise of the smartphone.
8 years is a loooooong time in this space, and yes you (and most people here) are in the minority.
Notebooks have been outselling desktops for several years, and in 2011 smartphone shipments were higher than all PC form-factors combined. It's pretty clear where the big bucks are going, and it isn't desktop PCs.
We'll just be using large screens, keyboards and mice wireless connected to our ultra portable devices.
The desktop will likely still exist for people like us who frequent this site, however its role will be far more specialised, possibly more as our personal cloud servers than as our PCs.
Wow. Thanks for the excellent article: I really enjoyed it. The thought of having a processor on the power level of Ivy Bridge in my mobile phone blows my mind. Honestly though, I really can't see how the volume of CPUs for desktop PCs and servers is going to drop so dramatically that Intel will need the volume generated by mobile to "survive". Yes, of course more volume will help, but 8 years from now, even if mobiles have that kind of computational power, I would imagine that a desktop would have 10~20x that performance, as it does today. It's true that today's CPUs are typically more powerful than the average user ever needs, but raise your hand if you wouldn't trade your CPU for one 10x faster (in the same power envelope)... That said, 10W still seems like a lot to fit in a mobile: who knows the power consumption of today's high-end mobile CPUs (a quad-core Krait, for example, or even Tegra 3)?
Intel's real problem is that "good enough" computing arrived in the typical desktop CPU a couple of years ago and is rapidly approaching in mobile. With more and more tasks being offloaded to the cloud, battery life is becoming a stronger and stronger focus.
What's sad is that because AMD isn't the major player it once was, Intel has taken its eye off the ball, revving Atom with only minor tweaks and taking a laissez-faire approach to GPU performance. It's only recently, with mobile starting to dominate the minds of consumers and Intel lacking any major design wins (the RAZR i doesn't count), that Intel has been forced to push as hard as it is now.
"Within 8 years many expect all mainstream computing to move to smartphones, or whatever other ultra portable form factor computing device we're carrying around at that point."
They said the same thing about laptops. Sure, laptops hold about 60-65% of the market these days, but the desktop is still very much around, and is the preferred platform for PC gamers and HTPC applications. They're far more flexible than any mobile form factor.
Smartphones also have the severe disadvantage of a very small screen. Even the largest are too small for most people to deal with. On top of that, actually surfing the net on those tiny screens is an exercise in frustration for many people. I try to tap on a link, only to get the link next to it, or above it, or below it, or possibly having my stupid phone just select the text instead of following the link.
Smartphones have their niche. There's no doubt there, but they are not going to be anyone's mainstream device unless they have needle thin fingers and 20/10 vision.
I agree with the notebook/desktop comparison - these form factors won't go away. I should have said the majority of mainstream client computing goes to smartphones. And solving the display and input problems is easy: wireless display (WiDi/Miracast) and wireless keyboard/mouse (or a dock that does both over wires if you'd rather that).
While not a hardware issue (and thus not a major AnandTech venue), I would be amused if one of your writers explored the implications of small-screen mobile devices on data storage design (normal-form databases vs. traditional files). My take is that small, consistent bites of bytes are required, and will eventually change how data is stored on servers. Any takers?
Very well written article. Other sites should read Anandtech to see how it should be done.
Thank you.
All this power saving in idle conditions is great (love the looping-of-the-frame-buffer idea), but users aren't always just reading text on their screens. When these chips are under load they are still going to draw very significant amounts of power. Unless battery technology improves by an order of magnitude I don't see Haswell (or its replacements) fitting into ultraportable devices like phones or "phablets". The other comments concerning AMD are on the mark. AMD is in big trouble. They are too far behind Intel right now and every indication is they will be falling further behind.
Steamroller will haul AMD back towards Intel. Not completely, but a lot closer than they have been, and potentially even ahead in some cases. Still, that process deficit has to be painful, as AMD can only win on idle power.
I really hope GF don't mess up again, as delays really are costing AMD dearly. Steamroller is a good design, the sort that means AMD can have a cheaper but still decent part, but I fear it'll come too late.
Then I sincerely hope AMD can still survive and stride forward in this mobile tide. (R.R. and J.K., you reading this?)
It may look silly, but I do like underdogs and their (solid) products, especially when they achieve something with less talent, capital and executive muscle.
"To put it in perspective, you'll be able to get something faster than an Ivy Bridge Ultrabook or MacBook Air, in something the size of your smartphone, in fewer than 8 years". I can tell you right now, while this architecture is absolutely great on a motherboard, this isn't the right path to the mobile space.
"Haswell is the first step of a long term solution to the ARM problem." Unfortunately, anandtech is one of the few places left that can call intel on this marketing blather. Intel's ARM problem is that there is no more efficient way to execute instructions than on a in-order, single instruction issue, clean RISC design: all of which are standard features on an ARM. ARM's intel problem is that this limits you to about .5GIPS ([G]meanless indicator of processor speed) compared to over 6GIPS on an all out Intel design.
The choice isn't all or nothing, just that this time Intel chose performance over efficiency. MIPS, Alpha and (to a large part) PowerPC all fell to high-performance Intel chips that were vastly less complex than current designs. ARM could try to compete with Intel on performance, but if they are lucky they will end up like AMD, and if they can't out-design Intel (remember Intel's process advantage) they will end up like MIPS, etc.
The reason this all appears to be built around speed (and not efficiency) can be found on pages 7 and 8 (despite protests listed on those pages). Intel needs to add wider execution paths to try to get a tiny few more instructions out per second, all the while holding even more (than ivy or sandy) instructions in flight in case it can execute one. All this means a much longer path for any instruction and many more things computed, more leaky transistors leaking picoamps, more latches burning nanowatts. All ARM has to do is execute one after another.
I am surprised that they bothered to toot their horn about the GPU. It might beat ARM, but any code that can be made to fit a GPU should be run on an AMD machine (or possibly discrete nVidia board). They have been pushing Intel graphics for at least 15 years, don't pretend they are ever going to get it right.
In conclusion, I want one of these in my desktop. A phone CPU should look much more like an early core (maybe core2) design, maybe even more like a pentium pro.
If we're going to start a RISC/CISC battle, you should really look at a modern ARM architecture before talking.
What you can fit in a phone today isn't going to be what you can fit in a phone 8 years from now (in terms of both TDP and die size).
Getting Haswell-class performance from a 2020 smartphone isn't that far-fetched...you can argue that modern smartphone SoCs are close to the performance of the Athlon 64 2800+ or the Prescott Pentium 4s of 2004 in a lot of tasks.
There is a reason Atom is getting creamed in the phone space by ARM. Also, the only way TDP budgets are going to change is with major increases in battery technology. X joules (typically expressed as W·h in battery speak, but why not stick with SI units) means X seconds at 1 W, or X/n seconds at n watts.
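To make that relation concrete, here is a trivial sketch (my own figures, not from the comment above; the 42 Wh capacity is just an assumed placeholder for an ultraportable battery):

```python
# Run time = stored energy / average draw. Battery capacity is an assumption.
def runtime_hours(battery_wh, avg_draw_w):
    return battery_wh / avg_draw_w

battery_wh = 42.0  # assumed ~42 Wh ultraportable battery, not a real spec

for draw_w in (30, 10, 8, 2):
    print(f"{draw_w:>2} W average draw -> {runtime_hours(battery_wh, draw_w):.1f} h")
```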
On the high end, everything that won the war for CISC (namely, Intel's manufacturing skill) is even more true than when they won it; there isn't going to be another such upset. That doesn't mean a chip designed for all-out performance has any business competing with ARM on MIPS/W. If they wanted to compete on battery life, they would have scaled down the depth and breadth of the queue, not increased it.
Actually, I was ready to go into full rant mode when I saw the opening. Then I checked and saw that "ultrabook" meant 1.8 GHz i3s. It is quite possible (although I still doubt it is a good way to use a battery) to build a chip that will do that and have low power. I just don't think that Haswell is in any way designed to be that chip.
-- everything that won the war for CISC (namely, Intel's manufacturing skills) is even more true than when they won
It's been true since the P4 that the "real" CPU is a RISC engine fronted by an x86 ISA translator. Intel tried to sell an ISA-level RISC chip (twice). Not so hot. But Intel does know RISC. I've always wondered why they used all that transistor budget the way they did, rather than doing the entire instruction set in hardware, as they could have. It's as if IBM turned all the 370s into 360/30s.
It was the Pentium Pro that switched to a modern out-of-order, micro-ops-powered CPU, i.e. the P6. It's only the front end that speaks x86. Intel's own RISC designs like the i960 ultimately failed, and EPIC even more so when it failed to outdo AMD and Intel server processors in enterprise applications. In reality customers only switched to Itanium because they had already made up their minds before there even was any product, thus killing the at-the-time more appropriate Alpha, MIPS and PA-RISC processors. But by the time those were phased out, Intel's x86-compatible chips had already gained the enterprise features they previously lacked and that had set those older chips apart.
The front end and x86 decode don't use that much space in modern processors at all. The CPU architecture isn't really all that important today; it's largely about the features the chip supports, the GPU, video decode/processing etc. ARM only made it into the out-of-order superscalar era in 2011 with the A9, and into superscalar in-order in 2008 with the Cortex A8. Atom is designed rather like a P5 CPU, i.e. superscalar in-order, and moves to an out-of-order design next year. Intel's first superscalar design was in 1988.
ARM just needs to be fast enough; it was fairly easy to replace SH3, Motorola DragonBall and i386-class designs in the mobile space, and it was largely Intel that did it. Earlier 8086-class stuff had already been left behind by that time. What's impressive now is the integration and finish of the ARM SoCs. It was Intel that didn't want companies like Research In Motion to keep using low-power Intel x86 chips in their handheld devices; that only changed when Intel sold off the StrongARM/XScale line in 2006. Intel has no reason to start creating custom ARM ISA chips again, as they can compete with x86 chips, for which they spend far more time adapting development tools and frameworks anyway. Atom as a whole has a much larger market than XScale had on its own. Remember that Intel dropped stuff like RAID/storage processors too. Having Intel as another Marvell in ARM chips today wouldn't have changed anything radically.
Also, FPU/SIMD has been a large part of later ARM designs and implementations. It's really a big deal, as we saw with the chips lacking some of those parts; you shouldn't forget how important those bits are. Others have failed because they didn't take them seriously, and that was 15-20 years ago even. That doesn't mean ARM is yet fighting x86-64 chips in high-end servers and workstations, though we will certainly see them entering that market by 2015.
Cortex A9's big IPC improvement came from going out-of-order, which kind of ruins your argument.
Similarly, the X360/PS3 PowerPC chips are strict in order and super ultra slow as a result - at 3.2 GHz they can't match a PowerMac G5 with out-of-order at 2.2 GHz. But I suspect that wasn't the point - Sony and MS can claim the eye-popping (in 2006) 3.2 GHz figure, and the heat production is certainly less than a PPC G5.
Has anyone seen an A9 in the wild? I don't doubt huge IPC improvements (back when O-O-O was new, it tended to double performance). My statement is that it will kill GIPS/W and that Intel can much more easily design a chip that can beat it in both raw performance and GIPS/W (note that your mention of heat production agrees with me).
Also note I suspect that the goal of A9 is to keep the power low enough to keep it out of where Intel wants to go. A rough guess is that ARM might have a chance with dual issue o-o-o, but past that (roughly where Pentium Pro was designed) they can't really go.
The Cortex A9 has been in most major phone/tablet SoCs for the past two or so years. Apple's A5, A5X; Samsung's Exynos 4210, 4212, 4412; TI's OMAP 4 series; Nvidia's Tegra 2 and 3.
Cortex A15 is probably what you were thinking of that we've yet to see out in the wild. It's out-of-order like the A9, but with a great deal of other improvements.
Currently AMD has the upper hand in the notebook segment on battery life. Haswell changes that, but as is always the case with Intel, they will be pricey. And that's why AMD will still have 50% of the market: because vendors are cheap.
Power savings are much less relevant on the desktop front; I don't care so much about power as I do about heat. The AMD X4 700 is an awesome 4-core CPU for $75. Technically, it has all that you need from a CPU. Add a Radeon 7770 (again cheap) and you're golden. Yes, Intel is faster, but both Intel and Nvidia have shitty low-end products, and that's even more true when you think of Atom. 5-15% better single-threaded performance is not anything that is going to bury AMD lol.
On top of that, AMD has an Atom killer, and contracts with all major console vendors.
Haswell will have surprisingly little impact on AMD; what I am saying is that if you look at your own expectations, you'll realize they were highly inflated and you'll wonder why it didn't do more damage to AMD. I've explained why. Nevertheless, Broadwell is a significant threat, and we'll probably see AMD start to lose market share (much more than with Haswell) unless AMD can fight back, and it will; but nobody knows if it will be enough.
"Overall performance gains should be about 2x for GT3 (presumably with eDRAM) over HD 4000 in a high TDP part."
Does this mean the regular GT3 without the eDRAM cache will be twice the performance of the HD 4000 and the one with the cache will be 4x? Or that the one with the cache will be 2x? In which case, what would the one with no cache perform like? With so many more EUs, the first is probably correct, right?
"presumably with eDRAM"... So the GT3 in Haswell has over double the EUs of Ivy Bridge, but without the cache it doesn't even get to 2x the performance? That seems off to me; doesn't it seem like the GT3 on its own would be 2x the performance, while the eDRAM cache would make for another 2x?
It probably means that, like AMD, Intel is hitting the wall on memory bandwidth for IGPs. When it finally arrives, DDR4 will shake things up a bit; but DDR3 just isn't fast enough.
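As a rough illustration of why bandwidth is the worry (my own back-of-the-envelope numbers, not figures from the article), a dual-channel DDR3 interface tops out well below what even a midrange discrete card has all to itself:

```python
# Peak DDR bandwidth: transfers/s x bytes per transfer x channels.
def ddr_bandwidth_gbs(mt_per_s, bus_bits, channels):
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

print(ddr_bandwidth_gbs(1600, 64, 2))  # dual-channel DDR3-1600: ~25.6 GB/s
print(ddr_bandwidth_gbs(2133, 64, 2))  # dual-channel DDR3-2133: ~34.1 GB/s
# A midrange discrete card with 128-bit GDDR5 has roughly 80+ GB/s to itself,
# so a shared 25-34 GB/s pool for CPU plus IGP is a real constraint.
```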
I don't think so; doesn't the HD 4000 have more bandwidth to work with than AMD's APUs, yet offer worse performance? They still had headroom there. I think it's just for TDP: they limit how much power the GPU can use since the architecture is oriented at mobile.
We laugh but one possibility is that Intel hopes to sell Haswell's inside US broadcast equipment. There isn't much broadcast equipment sold, but the costs are massive, and there's no obvious reason not to replace much of that custom hardware with intel chips. And much of the existing broadcast hardware (at least the MPEG2-encoding part) is obviously garbage --- the artifacts I see on broadcast TV are bad even for the prime-time networks, and are truly awful for the budget independent operators.
Much like they have written a cell-tower stack to run on i7's to replace the similarly grossly over-priced custom hardware that lives in cell towers, and are currently deploying in China. Anand wrote about this about two weeks ago.
We'll probably see DDR4 in the ARM space before we have it on Intel.
Maybe this should be AMD's focus of attack: if they can't compete on performance, at least try on chipset features.
Perhaps Intel's biggest concern would be if somebody comes along with a super-efficient x86 emulator for ARM. Going forward, "legacy applications" is going to be an increasingly important selling point to prevent ARM inroads on the low end.
Microsoft keeping their Windows ARM version locked-down is a key to that too, and likely a deference to their relationship with Intel. But Apple is less likely to similarly constrain themselves.
>We'll probably see DDR4 in the ARM space before we have it on Intel.
>Maybe this should be AMD's focus of attack: if they can't compete on performance, at least try on chipset features.
The problem with DDR4 is likely going to be the price. We all know how the memory industry likes to jack up prices whenever a new spec comes out. Remember how expensive DDR3 was when it started to replace DDR2?
Some people joke that this transition is the only time they make any money in the RAM business, and considering the low prices of DDR3 you have to wonder.
DDR4 might offer some performance and power advantage on release, but it will likely be more expensive and take time (12-18 months?) to offer a compelling performance / $ advantage over cheap DDR3 variants.
If AMD is trying to position itself as 'value' brand, chaining themselves to DDR4 (before Intel's volume brings down the prices for everyone) could spell their doom.
Intel is set to launch Ivy Bridge-EX on a new socket late in 2013. The on-die controller will likely use memory buffering similar to what Nehalem-EX and Westmere-EX use. The buffer chips may initially use DDR3, but this would allow for a trivial migration to DDR4 since the on-die controller doesn't communicate directly with the memory chips.
Come to think of it, Intel could migrate Nehalem-EX/Westmere-EX to DDR4 with a chipset upgrade. Vendors like HP put the buffer chips and memory slots on a daughter card, so only that part would need replacement.
I suspect that 95W is the rated socket limit. This is similar to how Intel advertises Ivy Bridge at 77 W on the desktop but tells motherboard manufacturers to build around the higher 95 W figure.
What is odd is that Haswell will move some of the VRM circuitry on the package which should restrict just how far off that 95W figure motherboards can deviate.
Felt so good to read a 'proper' Anandtech article after so long, instead of the usual Apple worship and cheap fillers.
Haswell is looking very good. Would make an ideal upgrade for Sandy Bridge users. AMD is done, but thankfully Intel sees some threat from ARM so that will keep them innovating.
I hope Intel makes sensible choices with Haswell SKUs and gets away from their artificial crippling and segmentation tendencies. That's about the only thing that can ruin Haswell.
Once again they bump up the number of transistors being used on their worthless video-and this time they even lower CPU performance (L3 cache) to appease their worthless video.
Interesting article, but I guess I misunderstood previous articles...I thought Conroe through Ivy Bridge had 4 integer execution units per core? (As does Piledriver?)
Good article, and the information that you need Windows 8 to fully utilize Haswell was new to me. It will be interesting to see how much better Haswell will be with Win 8 compared to Win 7. It seems to be the same kind of dilemma as with AMD Bulldozer/Piledriver, where there seems to be somewhat better performance with the new OS, but how much remains to be seen.
Apple doesn't have any fabs though and if Samsung isn't willing to re-sign another contract, they're going to be in a bit of a bind. In other words, it won't be cheap. And even if Samsung does re-up, you can be sure that it'll come with an additional $1.05b price tag to offset any "losses" in their mobile division.
I felt the first page overestimated Apple's influence quite a bit. They have ~5% desktop marketshare and 0% in the server space. Not to trivialize any loss in CPU sales, but Intel's primary headwinds don't involve a possible Apple switch to ARM.
Apple's influence comes from the mobile market, which is beginning to dwarf the PC market (and is larger than the server market in terms of volume). Apple is the largest tablet maker and a major smartphone manufacturer. Their hardware is backed by one of the largest digital media markets. On top of this, Apple is the world's largest consumer of flash memory, whose orders are large enough to directly affect NAND pricing.
With the rest of the industry going ultra mobile, they'll have to compete with Apple, who is already entrenched. Sure, the PC will survive, but mainly for legacy work and applications. There isn't enough of a PC market in the future to be viable long term with so many players.
While all this is true, the first page seems to indicate that Intel is really pushing the low power envelope partly because of rumors that Apple will move away from Intel chips in their laptop / ultrabook products.
While I'm sure Intel is happy to be in MBAs, etc., losing that business isn't going to be as big a deal as the other pressures facing the PC market (as you mention).
Now if WinRT on ultrabooks / laptops began to take off... that would be a huge problem for Intel.
Losing just the MacBook Air isn't going to hurt Intel much as a whole, but it is doubtful that Apple would move only that product line to ARM; the rest of the lineup would likely follow. The results by the numbers would hurt Intel but wouldn't doom the company. Intel does have the rest of the PC industry to fall back upon... except the PC market is shrinking.
Apple is one of Intel's best gateway into the ultra mobile market. Apple has made indications that they want to merge iOS and OS X over the long term which would likely result in dropping either ARM or x86 hardware to simplify the line up.
WinRT is also a threat to Intel but WinRT has next to zero market share. The threat here is any success it obtains. Apple on the other hand controls ~75% of the tablet market last I checked.
Android is a bit more neutral for Intel, as manufacturers can transition between ARM and x86 versions with relative ease. Intel will just have to offer competitive hardware at competitive prices here. The sub-10W Haswell parts are going to be competitive, but price is a great unknown. ARM SoCs are far cheaper than what Intel has traditionally been comfortable with. So even if Intel were to acquire all of the Android tablet market, it would be a minority at this time and over the short term (even in the best case, it'd take time for Android tablets to surpass the iPad in market share).
So ultimately it would be best for Intel to snag Apple's support due to their dominant market share in the tablet space and influential position in the smart phone space.
Great article. Great depth, great info and very thorough. Hats off :)
But I couldn't shake the feeling that I was missing perhaps the most important bit of information: price.
Obviously, Intel isn't going to give that away 9 months away from the presumed launch date -- though in typical fashion we'll see it leaked early. It still is the biggest question regarding Haswell's, and in turn Intel's, success against ARM.
I think most consumers are already at that good enough stage, where your Tegra 3 or Snapdragon S4 can fulfill all of their computing needs on a tablet or a phone. The biggest drawback for productivity purposes isn't necessarily the "lack of CPU performance" but rather the lack of a proper keyboard/mouse, gaming, along with a rare application or two that's still locked to x86 (Office rings a bell, though not for long). Or I should say, these were drawbacks. Not any longer.
So is Intel going to cut their margins and go for volume? Or are they just going to keep their massive margins and price themselves out of contention? Apple carries with it a brand name that people want. It's become more than a gadget: a fashion accessory. People don't mind paying the Apple tax. I don't think I ever will, but at least I can notice the trend. The Intel brand doesn't carry the same cult following, and neither does x86. Unless Intel is willing to compete with ARM on price, lowering the cost of their products below Apple's, I don't think the substantial increases in efficiency and performance will matter all that much.
"Sandy Bridge made ports 2 & 3 equal class citizens, with both capable of being used for load or store address calculation. In the past you could only do loads on port 2 and store addresses on port 3. Sandy Bridge's flexibility did a lot for load heavy code, which is quite common. Haswell's dedicated store address port should help in mixed workloads with lots of loads and stores."
The rule-of-thumb numbers on "ordinary" integer-type code are: 1/6 of instructions are branches, 1/6 are writes, 2/6 are reads, and 2/6 are ALU ops.
This makes it more obvious why Intel moved as it did. You want to sustain as close to 4 ops/cycle as you can. This means your order of adding abilities should be exactly as Intel has done:
- first, two ALUs
- next, two reads/writes per cycle (ideal would be any mix of loads/stores, but Intel gave us one load plus one store per cycle)
- next, two loads per cycle
- next, make sure branches aren't throttled (because back-to-back branches are common, and you want branches resolved ASAP)
- next, make the load-store system wide enough to sustain a MAC per cycle (two loads plus a store)
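A quick sketch of that bottleneck arithmetic (my own illustration; the port counts below are hypothetical stand-ins rather than Intel's documented port map):

```python
# Estimate sustainable instructions/cycle for the rule-of-thumb mix above,
# limited by whichever execution resource is most oversubscribed.
ISSUE_WIDTH = 4  # assumed 4-wide issue

mix = {"alu": 2/6, "load": 2/6, "store": 1/6, "branch": 1/6}

configs = {  # hypothetical port counts, not any real chip's port map
    "2 ALU, 1 load, 1 store, 1 branch": {"alu": 2, "load": 1, "store": 1, "branch": 1},
    "2 ALU, 2 load, 1 store, 1 branch": {"alu": 2, "load": 2, "store": 1, "branch": 1},
    "4 ALU, 2 load, 1 store, 2 branch": {"alu": 4, "load": 2, "store": 1, "branch": 2},
}

for name, ports in configs.items():
    # Throughput caps when mix[r] * IPC exceeds the ports available for r.
    ipc = min(ISSUE_WIDTH, min(ports[r] / mix[r] for r in mix))
    print(f"{name}: ~{ipc:.1f} instructions/cycle sustainable")
```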
It's hard to see what is left to complain about at this level. And of course we have better lock performance. So what's left?
What I think still has substantial room for improvement (correct me if I'm wrong) is (a) TLB coverage and (b) TLB efficiency.
TLB coverage could be improved with a 2nd level TLB but (as far as I know) Intel doesn't go in for that, unlike POWER. By TLB efficiency, I mean not needing to lose performance due to different address spaces. Unfortunately Intel seems screwed here. The POWER segment scheme (especially the 64-bit scheme) is REALLY powerful here in allowing multiple address spaces to coexist, so that multiple shared libraries, the main app code, IO, and memory mapped files, can all have persistent simultaneous TLB entries. (Note that this has nothing to do with the Intel segment scheme --- different technology, to solve a different problem.)
As far as I know, right now all Intel has is a single ASID representing a process. Better than no ASID and having to flush the TLB on every context switch, but not especially good at sharing entries; so (again as far as I know) shared libraries or shared mem-mapped files being used by multiple processes, even when they are mapped to the same address, have to have separate TLB entries, each one with a different ASID corresponding to the process using them.
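A toy model of that point about per-process ASIDs (my own illustration, not how any real TLB is organized): keying entries by (ASID, virtual page) means a shared mapping costs one entry per process even when everything else is identical.

```python
# Toy TLB keyed by (ASID, virtual page). With per-process ASIDs, two processes
# mapping the same shared-library page still burn two entries.
tlb = {}

def tlb_fill(asid, vpage, ppage):
    tlb[(asid, vpage)] = ppage

# Same page at the same virtual address in two processes, backed by one
# physical page (all numbers here are made up).
tlb_fill(asid=1, vpage=0x7F00, ppage=0x1234)  # process A
tlb_fill(asid=2, vpage=0x7F00, ppage=0x1234)  # process B

print(len(tlb))  # 2 entries spent on 1 shared physical page
```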
Stupid me. I should have read the entire article. So we do have a nicely sized 2nd-level TLB.
I guess my only remaining complaint now is that ASIDs are too coarse a tool. In principle you could get around some of the problems I mention by using dedicated large pages for some particular purposes (e.g. to cover the OS code and data, the equivalent of the frame buffer for modern windowing systems, and some pool of common shared libraries). Does anyone know the extent to which Windows and OS X actually make use of dedicated large pages in this way?
Great article Anand, but when will Anand cloning be incorporated in CPU designs so we can all have one of you at home to pull out and extract information from @ will ? ?
Although, with that said, I was already made aware of much of this recently from listening in to some random guys babbling about tech stuff on a podcast ;)
Anand, you write the best tech articles on the web. As a graduate student in computer engineering, I appreciate the practical yet technical analyses you write on the industry. Keep it up!
I like the concept of Panel Self Refresh, yet I feel that Intel could implement this themselves. I'm not an expert, but couldn't a buffer be placed on the CPU package between the GPU and panel? This may not be as efficient as if the panel makers did it themselves and it would probably only work when using the IGP (when it would most likely have the greatest impact), but at least it is a step in the right direction.
Additionally, Great Article! Anandtech provides some of the most thorough technology articles. Keep it up.
" If all mainstream client computing moves to smartphones,..........."
Seriously? The idea of all mainstream computing done on nothing but smartphones seems to stretch the imagination just a bit much. There isn't even the most basic of businesses that doesn't have a computer (made with mainstream components, as most small and medium-sized businesses' machines are) and business software. Don't forget the PC gamers and people who like larger viewing and typing surfaces. Or the fact that in eight years, home and business PCs will be blindingly fast, with larger displays of much greater pixel density, possibly clear-screen touch surfaces, likely alternative interfaces beyond just a keyboard and mouse, and incredible computing and rendering power.
The likelihood of the general populace turning all their computing needs over to a palm-sized PC I see as a kind of weird fantasy where people learn to love minute typing interfaces and squinting at high-density displays fit into 3.5 by 4.5 inches for long periods of the day without interruption. No, to push the idea of micro computing one must discount all of the other advances in the computer/electronics industries in order to make their pet theory viable.
"The race to the bottom that we've seen in the LCD space made it unlikely that any of the panel vendors would be jumping at the opportunity to make their products more expensive."
It's unfortunate, because of what might have been had the manufacturers, of which there are only three main ones if I recall, had the foresight to market to customers who weren't just looking to buy the lowest-priced panel on display at Best Buy. Had they started years ago, there would be some pretty fantastic panels available today for much more reasonable prices than we see for the 27 and 30 inch 2560x1600 panels.
This doesn't really belong in the Haswell article, but I would love to know more about the physics and constraints of TDP. Like, hit me with a chart of TDP impact for a variety of important parts in phones, tablets, laptops, and desktops. Show me a chart of TDP budgets and mitigation strategies. Explain to me roughly how physics forces those things to relate. Please.
Seems important and it's easy to understand the comparison from Ivy Bridge to Haswell but that doesn't feel like the big picture.
Like Atom, you're stuck in no man's land: way too high for tablets and phones, but in desktops and laptops, who cares if the AMD solution uses 30 watts instead of 8? That difference isn't enough to matter when you take the whole platform into account, especially at lower price points where battery life won't be fantastic anyway. On the desktop it's completely pointless.
On a laptop, the difference between drawing 30 watts and 8 watts will more than triple your battery life, especially at lower price points/smaller form factors where manufacturers gimp the battery.
How's about browsing for 9 hours instead of 3? Or 27 hours instead of 9? I'd jump on it in a heartbeat.
Haswell will sport 32 single-precision or 16 double-precision FLOPs per cycle per core for its desktop and high-TDP mobile SKUs [at least 30 watts and up].
Can anyone speculate on how many single- and double-precision FLOPs per cycle per core Haswell will execute for its low-TDP SKUs, for example the sub-10-watt and 15-watt SKUs? Similarly, I would be interested in speculation about how many graphics execution units (shader cores, if you prefer standard nomenclature) the low-TDP Haswell products will have, and about their graphics clock speeds.
Is it possible that the high-end tock 22 nm Xeon server parts could have 32 double-precision or 64 single-precision FLOPs per clock per core?
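For anyone who wants to turn those per-cycle figures into peak numbers, the arithmetic is just cores times clock times FLOPs per cycle; the core counts and clock speeds below are made-up placeholders, not leaked SKUs:

```python
# Peak throughput in GFLOPS = cores x GHz x FLOPs per cycle per core.
def peak_gflops(cores, ghz, flops_per_cycle):
    return cores * ghz * flops_per_cycle

print(peak_gflops(4, 3.5, 32))  # hypothetical 4-core 3.5 GHz part: 448 SP GFLOPS
print(peak_gflops(4, 3.5, 16))  # same hypothetical part, double precision: 224 GFLOPS
print(peak_gflops(2, 1.5, 32))  # made-up 2-core 1.5 GHz low-TDP SKU: 96 SP GFLOPS
```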
Interestingly, this might be the first chance in forever AMD has at competing with Intel. If Haswell's sole goal is to hit lower power targets, and Piledriver hits its 15% and Steamroller its 15% over that, AMD is suddenly right up with Intel's i5 series with its GPU-less chips, and upper i3-range with their APUs, which is absolutely perfect positioning: most i5 purchases are for people planning to pair with discrete graphics, while most i3 series seem to go to the PC buyer looking for low price tags.
The one downside is that the i7 series is Intel's money-maker: the clueless people who think they're getting maximum performance but are really just feeding the binning system and buying an unbalanced PC.
You got it wrong bro, Intel's money maker is not the i7, it's the i3 and i5 (low end and a bit of mainstream).
As for Haswell, on paper it looks too good to be true, just as Ivy did last year, and Ivy ended up anything but impressive.
Since Intel's Conroe core (2006) there actually haven't been any significant improvements worth mentioning. There's not much that today's CPUs can do that a Pentium 4 could not a decade ago.
I would love to see some innovations users could really benefit from (something like a reattachable, thin, light, portable, firm solar panel hooked onto the back of the screen, or even built in as the last layer of the screen itself) and not the crap Intel/AMD gives us year by year.
Anand is very right: it's all about power savings, which in effect make smaller and more portable form factors possible!
As for mainstream performance, my Linux workstation still uses a Q9450 rev. C1 from 2008 clocked at 3.2 GHz, and an SSD of course. That box feels in every way as snappy as my Windows box with Sandy Bridge at 4.8 GHz. Which means I really didn't need more performance than what the C2Q already gave. Of course the SB box benchmarks much faster, about twice as fast in most things, but the point is that for myself I really don't need that performance except for the occasional game.
But I could use a smaller, cooler running device instead!
Something I would like to see is a decent comparison between Intel's and AMD's plans. Many might be able to outline the basics, but a thorough article on the subject should be rather enlightening... Comparing their design philosophies, architectures, possible pitfalls and successes etc, pretty much what's been done with this article only with both companies. I know it might be time consuming but I imagine it could be quite a nice read.
Agreed; it's difficult to find the common ground with so many different chip architectures. x86 is big enough competition on its own, but now it's getting split wide open with ARM and big.LITTLE etc., so it's always helpful to have either more charts or real-world examples lol.
My take from this article though: Haswell still won't have the prowess to beat the GT 650. I have a GTX 660 in my laptop w/ Optimus(TM). It works. It runs a game on the HD 4000 at 17 FPS; on the GTX 660 I get 100+ FPS and am able to use higher anti-aliasing settings. So clearly a 100% improvement over Ivy Bridge only puts the chip into the "mediocre" category by the time it's released.
"The bigger concern is whether or not the OEMs and ISVs will do their best to really take advantage of what Haswell offers. I know one will, but will the rest?"
I am curious: who is the one OEM that will do their best to really take advantage of what Haswell offers?
Apple. Or are you joking? I personally hate Apple and have since the original iMac, but their engineering is top notch when it comes to getting ideal performance from silicon to user. So... guessing that's the reference.
A fine read, technically very comprehensive, but still overly melodramatic.
While it is true that it is crucial for Intel to get a foothold in the BYOD market, some things still hold true:
- In value and profit the PC processor market is much bigger than the BYOD processor market and will stay so for years, because PCs, especially business PCs, won't disappear anytime soon.
- Nobody can touch Intel in this market; it has been proved for decades. Not AMD at the height of its success, not mighty IBM, not Sun, nobody.
- Contrary to what you say, Intel has a definite production advantage and there are very few fabs able to compete. Note that Apple is incapable of producing processors; it is dependent on external manufacturers.
- What Apple does with its processors is interesting business-wise for its iPods/Pads/Phones, but Apple doesn't have the research power Intel and others have in the chip space, and I can't see how it will innovate better than Intel and other competitors.
- Intel is aware of its shortcomings and is pushing tremendously in the right direction. A competitor that doesn't rest on its laurels is a mighty threat; ARM beware.
- If Apple stops using Intel processors, it will of course wipe a few hundred million off Intel's turnover, but it won't be anything remotely dangerous for Intel.
- It remains to be seen whether Apple users will accept yet another platform change.
- It remains to be seen whether it would make sense business-wise for Apple.
- I am quite sure many phone companies will be open to renewed chip competition and won't let a single platform become too powerful.
All in all it seems to me Intel is as dangerous as ever, executing very well in its core business and heading towards great things in the phone/pad space.
Congratulations; an Intel CPU engineer wrote around 27 Dec 2012:
"... Anandtech's latest Haswell preview is also excellent; missing some key puzzle pieces to complete the picture and answer some open questions or correct some details but otherwise great. ..."
Is there going to be a replacement (37W) for the current IVB 35W quad-core part? Quite a few designs now depend on this lower-power quad-core option: the Sony S series and the Razer Blade, to name a few.
When can we expect all mobile CPUs (except maybe for the extreme series) to fall into the 10W-20W range? In three years' time and 10nm?
The decision not to include GT3 with desktop parts is very disappointing. A 35/45W low-voltage part with GT3 would make for an excellent HTPC build, among other things. Is there a chance Intel will change their mind and start shipping GT3 desktop parts at some point?
CaptainDoug - Friday, October 5, 2012 - link
Quite the read. Very informational. Anandtech has some of the best tech writers. True online journalism. Sometimes i miss that while reading tech blogs... You guys are a cut above.. at least one.colonelclaw - Friday, October 5, 2012 - link
Couldn't agree more, this article really brightened up what was otherwise a pretty miserable afternoon here in London. When am I going to be able to walk into a shop and buy something with Haswell inside it? Next March maybe?
Kepe - Friday, October 5, 2012 - link
As stated in the article, Haswell is coming in the summer of 2013.
linuxlowdown - Saturday, October 6, 2012 - link
Tag team Intel fanboy puke.
Azethoth - Sunday, October 7, 2012 - link
How do I downvote stupid crap like this "Tag team Intel fanboy puke." comment so that collectively we can see high quality comments without having to wade through the interturds as well? It really takes away from the best article I have read in a long time. Not because it is about Intel, but because it is about the state of the art.medi01 - Tuesday, October 9, 2012 - link
Well, I'd also ask how do I downvote stupid butt kissing like OP, while we are at rating....
Kisper - Saturday, October 20, 2012 - link
Many people enjoy well written and informative articles. Are you telling me that if you wrote, you would not enjoy positive feedback from your readers?CaptainDoug - Tuesday, October 23, 2012 - link
Exactly.
actionjksn - Sunday, October 7, 2012 - link
Why are you even on this article dumb fuck? I'm sure there is something that is of interest to you on the internet somewhere.
medi01 - Tuesday, October 9, 2012 - link
Not sure about him, but I've looked into this article to figure power targets for haswell (especially interesting to compare to ARM crowd), NOT to read orgasmic comments about eternal wizdom of Intel's engineering...Astarael - Monday, October 15, 2012 - link
Then get out of the comments section.
Old_Fogie_Late_Bloomer - Tuesday, October 9, 2012 - link
I finally made it through this article...hell, I took a course in orgnization and architecture earlier this year and I didn't come close to understanding everything written here.Still, it was a great read. Thanks for going to the trouble, Anand. :-)
IKeelU - Friday, October 5, 2012 - link
What's great is that Anand's been doing this for 15 years, has hired new editors along the way, and the quality hasn't wavered. I'm glad they haven't polluted their front page with shallow tech blogging like other sites I once enjoyed.I can't imagine this hobby without this site. I got into PC building just as it came online and have depended on it ever since.
TheJian - Monday, October 8, 2012 - link
I disagree. Ryan Smith's 660TI article had some ridiculous conclusions and went on and on about a bandwidth issue that isn't an issue at 1920x1200. As evidenced by the fact that in their own tests it beat the 7950B in 6 games by OVER 20% but lost in one game by less than 10 at 1920x1200. Read the comments section where I reduced his arguments to rubble. He went on about a dumb Korean monitor you'd have to EBAY to get (or amazon from a guy with ONE review, no phone, no faq page, no domain, and a gmail account for help...LOL), and runs in 2560x1440. If his conclusions were based on 1920x1200 like he said (which he repeated to me in the comments yet touts some "enthusiast 2560x1440" korean monitor as an excuse for his conclusions), he would have been forced to say the truth which was as his benchmarks showed and hardocp stated. It wipes the floor with the 7950B, just as the 680 does with the 7970ghz (yea, even in MSAA 8x) where they also proved only 1 in 4 games was even above 30fps...@2560x1600 with high AA which is why its pointless to draw conclusions based on 2560x1600 as Ryan did. Heck 2 of the 4 games at hardocp's high AA article didn't even reach above 20fps (15 & 17, and if bandwidth is an issue how come the 660TI won anyway?...LOL)Ryan was reduced to being a fool when I was done with him, and then Jarred W. came in and insinuated I was a Ahole & uninformed...ROFL. I used all of his own data from the 660TI & 7970B & 7970ghz edition articles (all by Ryan!) to point out how ridiculous his conclusions were. When a card loses 6 out of 7 games, you leave out Starcraft 2 (which you used for 2 previous articles 1 & 2 months before, then again IMMEDIATELY after) which would have shown it beating even the 7970ghz edition (as all the nv cards beat it in that game, hence he left it out), you claim some Korean Ebay'd monitor as a reason for your asinine conclusions (clear bias to me), in the 6 games it loses by an avg of 20% or more at the ONLY res 68 24in monitors on newegg use (or below, most 1920x1080, not even 1920x1200, only <2% in steampowered.com hardware survey have above 1920x1200 and most with dual cards in that case), you've clearly WAVERED in your QUALITY since Anand took up mac's/phones.
I'm all for trying to save AMD (quit lowering your prices idiots, maybe you'll make some money), but stooping to dumb conclusions when all of your own evidence points in the exact opposite direction is really shady. Worse it was BOTH editors, as Ryan gave up (the evidence was voluminous, he wisely ran and hid) Jarred stepped in to personally attack me instead of the data...ROFLMAO. You know you've lost when you say nothing about my numbers at all, and resort to personal attacks. Ryan nor Jarred are dumb. They should have just admitted the article was full of bias or just changed the conclusion and moved on. With all the evidence I pointed out I wouldn't have wanted it to be in print any longer. It's embarrassing if you read the comments section after the article. You go back and realize what they did and wonder what the heck Ryan was thinking. He said that same crap in his next article. Either he loves AMD, gets money/hardware or something or maybe he just isn't as smart as I thought :)
Anand's last hardware article on haswell said it would be a "MONSTER" but it's graphics won't catch AMD's integrated gpu and we only get 5-15% on the cpu side for a TOCK release. 2x gpu doesn't mean much with it being 9 months away and won't even catch AMD if they sit still. OUCH. So basically much ado about nothing on the desktop side, with a hope they can do something with it in mobile below 10w (only a tablet even then). I was pondering waiting for the "MONSTER" but now I know I'll just buy an Ivy at black friday...ROFL. What monster? In this article he says Broadwell is now the "monster"...heh. Bah...At least I got to read this before black friday. I would have been ticked had I read this after it hoping for the desktop monster. Since AMD now sucks on the cpu side we get speed bin bumps for microarchitecure TOCK's instead of 25-40% like the old days. I pray AMD stops the price war with NV and starts taking profits soon.
If it wasn't for their advantage on the integrated gpu, they'd be bankrupt already and they will be there by xmas 2014 at the current burn of 650mil/year losses (they only have 1.5Bil in the bank and billions in debt compared to 3.5B cash for NV and no debt, never mind giving up the race to Intel who dwarfs NV by 10x on all fronts). AMD's only choice will be to further reduce their stock value by dilution of shares (AGAIN!) which will finally put them out to pasture. Hopefully someone will pick up their IP, put a few billion in it and compete again with Intel (samsung, ibm, NV if amd stock drops to $1 by then, even they could do it). Otherwise, my next card/cpu upgrade after black friday will cost $1000 each as NV/INTC suck us all dry. There stock is already WAY down in credit rating (B+ last I checked, FAR from NV AAA), and they are listed as 50% chance of bankruptcy vs. all their competitors at 1% chance (intc, qcom, nvda, samsung etc). The idea they'll take over mobile is far fetched at best. I see nowhere but down for their share price. That sucks. I hate apple, but at this point I wouldn't even mind if they picked them up and ran with AMD's cpu mantle. We might start getting ivy 3770's (or the next king) at prices less than $329 then! The first sale I've seen was $309 in my email from newegg this weekend and that sucks in 7 months. No speed upgrades, no price drops, just the same thing for 7 months with no pressure from a competitive-less AMD. Their gpu sucks compared to 660ti (hotter, noisy, less perf), so no black friday discount. You either go AMD for worse but savings or pay through the nose for NV. Same with Intel and the cpu. In that respect I guess I get Ryan trying to save them...ROFL. But prolonging the inevitable isn't helping, I'd rather have them go belly up now and someone buy the cpu and run with it before it's so far behind Intel they can't fix it no matter who buys the IP. I digress...
Spunjji - Thursday, October 18, 2012 - link
God that was painful to even attempt to read. :/ Comparing AMD vs. nVidia to AMD vs. Intel is foolish in the extreme (there's a rather significant difference in the cost/performance balance, where AMD and nVidia are actually competitors) so I feel justified in not reading most of that screed.
ananduser - Friday, October 5, 2012 - link
Yes...Anand's quite the loss for the PC crowd. He's reviewing macs nowadays.
A5 - Friday, October 5, 2012 - link
If you owned a site and could delegate reviews you don't find interesting (oooh boy, another 15-pound overpriced gaming laptop!), wouldn't you do the same thing?
Kepe - Friday, October 5, 2012 - link
Mmh, I've also noticed how Anand seems to have become quite an Apple fan. Don't get me wrong, I love his reviews, and Anandtech as a whole. But the fact that Anand always keeps talking about Apple is an eyesore to me. Particularly annoying in this article was how he mentioned "iPad form factor" as if it was the only tablet out there. Why not say "tablet form factor" instead? It would have been a lot more neutral. Also, it seemed to confuse someone into thinking Apple might be putting Haswell into a new iPad.
meloz - Friday, October 5, 2012 - link
Agreed. The Apple devotion has gone too far and the editorial balance has been lost. The podcasts -in particular- are basically an advertising campaign for Apple and a thinly disguised excuse for Anand & Friends to praise everything Apple. So I do not listen to them.
The articles though -like this one about Haswell- are still worth reading. You still get as many gratuitous Apple references as Anand can throw in, but there is also plenty of substance for everyone else.
ravisurdhar - Friday, October 5, 2012 - link
It's not "devotion", it's simply an accurate description of the market. How many iPads are out there? 100 million. One tenth of a BILLION. One for every 70 people on the planet. Well over half of Fortune 500 companies use them. Hospitals use them. Pilots use them. Name one other tablet that comes close to that sort of market penetration. When Apple decides to make their own silicon for their devices, it's a big, big deal.For the record, I don't have one. I just understand the significance of the 800 pound gorilla.
Kepe - Friday, October 5, 2012 - link
Let's see. I think we can agree that the Samsung Galaxy S III was the most important Android phone launch of the summer, so it should get comparable treatment if Anandtech were completely neutral. Let's compare the articles about the SGS III vs. the iPhone 5.
Doing a search on anandtech.com gives us 8 articles/news posts about the SGS III vs. 13 articles/news posts about the iPhone 5.
SGS 3:
Five news stories about product announcements
Performance Preview article
Preview article
Review article
iPhone 5:
Why iPhone 5 isn't launched in 2011 article
Analyzing rumours about iPhone 5 article
New SoC in iPhone 5 article
iPhone 5 Live Blog from the product launch ceremony
Three news stories about new features and product announcements
iPhone 5 Hands On article
Lack of simultaneous voice and LTE/EVDO article
Analyzing Geekbench results article
Sunspider Performance Analysis article
Performance Preview article
iPhone 5 Display Thoroughly Analyzed article
+ The upcoming iPhone 5 Review article
+ articles such as "iOS6 Maps Thoroughly Investigated"
Look at the difference. It's quite clear which device gets more coverage. And it's the same thing for older iPhones. Articles such as "Camping out for the new iPhone 3GS".
This is NOT equal treatment of all products. This is why my trust for Anandtech has started to slip. Yes, Anandtech still is the best place for reviews, but one really has to wonder if those reviews still are as neutral and objective as they used to be.
vFunct - Saturday, October 6, 2012 - link
It's an android device. Android devices do not matter. Everyone uses iPhones anyways. They are better. Apple makes better products, including laptops.
No need to waste space on Android.
Haugenshero - Saturday, October 6, 2012 - link
Please take your pointless apple fanboy drivel to another site that doesn't care about actual hardware and software and just likes shiny things.
cjl - Saturday, October 6, 2012 - link
Apple's (iOS) current sales are only 20% of the overall smartphone market share, while Android is over 60%, so if either one of the two is largely irrelevant, it's Apple.
HisDivineOrder - Sunday, October 7, 2012 - link
Shhhh, icks-nay on the facts-nay. You might cause a fanboy's head to explode near one of those inconveniently placed explosive barrels we walk by in real life.
Just imagine a chain reaction. Caused by an Apple fan's mind being blown. You might take out an entire city block.
Do you want that kind of devastation on your karma? Think different. ;)
vFunct - Sunday, October 7, 2012 - link
The Android market is the cheap giveaways. No one willingly pays money for an Android phone.
Not everyone can afford the premium quality of an Apple product. They will have to settle for an inferior Android devices instead until they can afford higher quality products.
Kepe - Monday, October 8, 2012 - link
Nice trolling there. Now go back under the bridge and stay there =)
Old_Fogie_Late_Bloomer - Tuesday, October 9, 2012 - link
"Not everyone can afford the premium quality of an Apple product. They will have to settle for an inferior Android devices instead until they can afford higher quality products."Ha ha ha! Well done, if you're screwing around.
But seriously, if you actually believe that, seek psychiatric help. :-P
Spunjji - Thursday, October 18, 2012 - link
Hahahahahahahahahahahahahahahahahahahahaha
solipsism - Tuesday, October 9, 2012 - link
And how many of the 20% is on one phone? Let me know when you figure out how to cover every single Android-based device that hits the market in a given year.
Spunjji - Thursday, October 18, 2012 - link
Fuckwit.
nirmalv - Sunday, October 7, 2012 - link
Anandtech being a hardware site, it's more inclined to keenly follow hardware devices with new architectures and innovations. The iPhone brings in:
1, A new A7 chip design and a novel 3 core graphics core
2, A new 3 microphone parabolic sound receiving design (which likely will become the new standard)
3, A new sim tray design (which will also likely become the new standard)
4, New sony BSI stacked sensor (the 13 mpx version will likely be the rage next year).
5, The first time that we have a 32 nm LTE chip which will give all day usage.
6, New thinner screen with incorporated touch panel and 100% RGB
I am not sure about samsung but can anyone enlighten me about S3's technical achievements?
nirmalv - Sunday, October 7, 2012 - link
Sorry, make that a 28 nm LTE baseband.
centhar - Sunday, October 7, 2012 - link
99.998% of iPhone users just don't care about that. Really, they don't.
Geeks like me who do are too damn smart to sell our souls to such a god damned, locked down and closed system to even bother to care.
Magik_Breezy - Sunday, October 14, 2012 - link
2nd that
Spunjji - Thursday, October 18, 2012 - link
3rd
CaptainDoug - Tuesday, October 23, 2012 - link
4th
solipsism - Tuesday, October 9, 2012 - link
Of course a company that releases one device per product category per year as well as one with the greatest mindshare is going to have more articles.
But what happens when you add up all Samsung phones against all Apple phones in a given year?
What happens when you don't count the small blogs that only detail a small aspect of a secretive product but count the total words to get a better feel for the effort spent per company's market segment?
I bet you'll find that AT spends a lot more time covering Samsung's phones than Apple's.
Spunjji - Thursday, October 18, 2012 - link
This. I generally trust their editorial, but the focus on Apple prevails. One just has to read accordingly.
Kepe - Friday, October 5, 2012 - link
Also look at any other Apple product review. They are all ridiculously in-depth, with analysis of almost every single component in the product. The Macbook Pro with Retina Display got 18 pages, the 3rd gen iPad got 21 pages. Don't get me wrong, I like a proper review with everything analyzed, but it's only the Apple products that get these huge reviews. Compared to those massive Apple reviews, it's like all other products are just glanced over in a hurry. The new Razer Blade got 9 pages. The Asus Transformer Pad Infinity got 8 pages.
Peanutsrevenge - Friday, October 5, 2012 - link
What the hell are you guys bitching about?
Of course the iPhone articles are going to be longer and more numerous than GS3 articles.
iPhone releases come with new iOS releases and have their own eco-system.
Android phone releases use a common OS across them and therefore much of what's in one article doesn't need repeating in another.
Anand liking Apple is not our problem, I can see why people like them (not so much Anand) and that's fine, personally I dislike them (hate was originally typed, but was edited due to being incorrect), but still respect them and respect people who purchase their products (and pay for their litigation).
An entire page of comments talking about how Anand isn't allowed to like or talk about Apple products because you guys don't like them is ridiculous; they're a PC company and should exist on a PC website.
Grow up.
Kepe - Friday, October 5, 2012 - link
Sure, but I'm talking about dedicating entire, long articles to such things as the iPhone display or why it doesn't have a certain feature and so on. The SGS III has a very interesting display, too. Still it didn't get nearly as much attention. Of course Anand is allowed to talk about Apple products. What I want, though, is Anand(tech) to be as thorough in reviewing other products, too, or else stop making those huge articles only about Apple products. Because that is biased.
In the Macbook Pro Retina article Anand talked about the cooling system and the fan blades for one page. When I read any other laptop review on Anandtech, cooling is briefly described in a sentence or two.
Dedicating so much attention to just one company's products makes it look like Anandtech is biased. And that is not good.
Magik_Breezy - Sunday, October 14, 2012 - link
Hopefully because of these comments they'll finally see what we want, not some Apple crap. Good engineering, stupid management.
Spunjji - Thursday, October 18, 2012 - link
Nailed it.
vFunct - Saturday, October 6, 2012 - link
Android products would get more coverage if they bothered to do any engineering on them. Since they don't push the technology the way Apple does, they don't need a more in-depth review.
StevoLincolnite - Saturday, October 6, 2012 - link
You're kidding, right? Hardware wise, Apple has always been behind the curve compared to the competition in every facet of its product line-ups, or very quickly beaten.
lmcd - Saturday, October 6, 2012 - link
Umm, I would disagree there. Apple has always been ahead of the curve in GPUs and this is the FIRST TIME SINCE BEFORE THE A-SERIES that Apple has had a GPU without an overwhelming lead on the competition for more than half a year.*
While GPU selection isn't always huge, it's one of the biggest points of differentiation in mobile chips, along with power use.
*excluding the A4 if you count from when it was first in a phone as opposed to in a tablet.
Magik_Breezy - Sunday, October 14, 2012 - link
The last time I played a game on my phone was about 8 months ago and I'm 15! To say that Apple pushes their hardware is as naive as it gets.
The Galaxy S III was the best purchase I've made; even my mum doesn't like my iPhone 4.
vFunct - Sunday, October 7, 2012 - link
Yes, Apple products are always ahead of compromised Android products.
Android devices are badly engineered, like incorporating LTE when the battery can't handle it, for example. Apple doesn't compromise on their design.
Kepe - Monday, October 8, 2012 - link
How much does Apple pay you for a comment praising them?
Magik_Breezy - Sunday, October 14, 2012 - link
Probably real customer support without paying an extra $200
Spunjji - Thursday, October 18, 2012 - link
Yawn.
Spunjji - Thursday, October 18, 2012 - link
The bit that aggravates me the most is that even with this lavishing of review pages, the actual comparison of Apple products to competitors tends to be lacking (particularly with the Macbook article). This is understandable under some circumstances (iPhone battery life - new test, small selection of data points) but not for others.
Arbee - Friday, October 5, 2012 - link
I'm not really seeing any of that. AT's Android and Windows Phone reviews are just as in-depth and complimentary where due as their Apple ones. AFAIK neither Anand's nor Brian's daily-driver phone is even an iPhone. They care about the tech, not who it comes from. It just happens that Apple is often the original source of new and interesting things in that space. At this exact moment they're the only people shipping something new and interesting. When the Nokia 920 launches, I'm confident Anand and Brian will be ready with a 15+ page review and discussion of anything novel on the podcast, and when Winter CES brings us Tegra 4 and other Android news, I expect to see eye-glazing levels of detail here at AT.
(As an aside, I smiled at how closely DPReview's discussion of the alleged "purple haze" problem tracked Brian's rant on the podcast - clearly both writers know what they're talking about, which can be a rare quality in tech journalism).
VivekGowri - Saturday, October 6, 2012 - link
I think Anand's daily driver is an iPhone, but he frequently carries the latest Android/WP device on the side. Brian and I end up daily driving like a half dozen phones a month, depending on what shows up at our doorstep.
Zink - Saturday, October 6, 2012 - link
"iPad 3 form factor" was used because all of the other tablets have 25Wh batteries and draw about 5W max. The A5X iPad and it's giant 42.5Wh battery on the other hand can put out over 10W of heat which is the power envelope where Intel might target a Haswell SOC.amdwilliam1985 - Monday, October 8, 2012 - link
I totally agree with you on the Apple part. That's the biggest drawback when reading Anand's writing. Too much Apple praising.
I used to be an Apple fan, but recently they're becoming the biggest jerks in the technology industry. The human/ethical part of me hates them so much that I won't buy anything that has an Apple logo on it.
I gave away my iPad 2, switched to Samsung Galaxy S phones, and am using my HP windows 7 laptop over the 2011 MBA.
-say NO to bullies, say NO to Apple.
xaml - Thursday, May 23, 2013 - link
Number of problems solved with this approach: NO.
dartox - Tuesday, November 27, 2012 - link
Probably because most people know how large an iPad is - if he said "tablet" form factor that's ambiguous, and if he said "Motorola XOOM" form factor not as many people are familiar with the size.
Paer0 - Friday, October 5, 2012 - link
Yes... Macs are well engineered and deliver solid performance across the board.
stop-a - Saturday, October 6, 2012 - link
100% agree on the well engineered part, especially on the antenna gate when Steve God was saying "you're holding wrong", plus the recently ingeniously designed sapphire glass lens camera when Tim Schmok was saying "stay away from bright light source". Boy, Apple products must be engineered straight from heaven; they are just too perfect for a mere earthling to use.
Paer0 - Saturday, October 6, 2012 - link
@stop-a. Since you are a 100% Apple hater, let me ask you this: what computer do you use? And what OS do you use on it? I hope it doesn't crash several times a day. I use a MacBook Pro 2012 and I don't see anything come close.
Urizane - Saturday, October 6, 2012 - link
You really shouldn't use the 'crash several times a day' piece anymore. I'm annoyed every time I see this. My Windows 7 machine has an uptime of 20 days and counting. Most of the time, it waits for me to connect to it via SplashTop or FTP, or it's recording TV shows, but when I play games, I stress it bigtime. Seriously, stop with the Windows constantly crashes crap. It's just plain false now.
P.S. - 20 days ago, I brought it to another house, thus the interruption in uptime.
StevoLincolnite - Saturday, October 6, 2012 - link
I have a dual socket 2011 motherboard with dual Core i7 3930K's, both chips clocked to 4.6ghz, 32gb of ram, and triple Radeon 7970 3gb cards powering 3x 27" Dell U2711 monitors in Eyefinity.
'Kay, go. Let's see if your Mac can keep up, or a Mac workstation at the same price. (Hint: Not going to happen.)
Besides, Macs look ugly. I prefer the whole she-bang of a side window with a nice water cooling loop and having the whole thing light up, not some dull silver box.
Plus, my system is completely stable. Never had a crash yet with Windows 7 and... I have access to the last couple of decades' worth of software and games, not to mention emulation of other platforms.
I can also pretty much find software and hardware easily and it will "just work". I never have to ask the question: Will it work on a Mac?
lmcd - Saturday, October 6, 2012 - link
I don't think you're in their target audience, for some reason. They're the best preconfigured system out there, especially once you ignore price.
Magik_Breezy - Sunday, October 14, 2012 - link
You'd hope a manufacturer can "configure" a system for an extra $1,400
Hardware: $400
I'm Apple: $1,400
Total: $1,800
With PCs, manufacturers almost always lose money selling motherboards
vt1hun - Tuesday, October 9, 2012 - link
Two Core i7s working on a dual 2011 socket motherboard? You need QPI links for that, which only certain Xeons have. Sounds like your system will just NOT work!
FunBunny2 - Tuesday, October 9, 2012 - link
If Steve hadn't done what Apple does best (according to Steve), "steal" BSD unix, would you still be crowing?
Magik_Breezy - Sunday, October 14, 2012 - link
His operating system doesn't crash 7 times a day because he doesn't run OS X. I'll rephrase that: because he isn't a retard.
HisDivineOrder - Sunday, October 7, 2012 - link
Let's not forget the obscenely high failure rates of Macbook Pros, because they are huge, metallic, and yet refuse to have vents ruin the smooth awesomeness of their aesthetic.
Whoops, for many it won't last more than two years, if that. Hell, if you're lucky, your battery will give out before your laptop cooks. Regardless, look up what Apple suggests and you'll get:
Buy another one. Yours is old. ;)
Magik_Breezy - Sunday, October 14, 2012 - link
Anything delivers "solid performance" on Facebook & iWorkWhy pay $2,000 for that?
random2 - Friday, October 5, 2012 - link
I agree. Admittedly I am not an apple fan and view them as people who have undergone a degree of brainwashing, compounded by the need for some to keep up with the Joneses. A certain degree of mind control must be necessary to stick with a company that has had some questionable business practices as far as customer relations, dealing with product issues and denying said issues; not to mention the whole hypocritical stance by apple in regards to copyright infringement has also left a bad taste in my mouth.
hasseb64 - Saturday, October 6, 2012 - link
Disagree, not that much new from the already published IDF reports almost 1 month ago. What is interesting is the claimed 40 EU GT3; other sources say lower amounts.
JKflipflop98 - Saturday, October 6, 2012 - link
I totally agree. It's articles like this that have kept me coming back for years. Keep up the good work Anand!
tipoo - Sunday, October 7, 2012 - link
"You can expect CPU performance to increase by around 5 - 15% at the same clock speed as Ivy Bridge. "That seems terribly disappointing for a tock, even IVB as a Tick managed 10% in most cases.
medi01 - Tuesday, October 9, 2012 - link
One can't be biased !@# !@#@ and a good journalist at the same time.
One needs to be blind not to see how the glass is always half empty for AMD, and half full for nVidia/Intel. F**!@#'s were shameless enough to test a 45W APU with a 1000W PSU, and such crap is all over the place.
Paulman - Friday, October 5, 2012 - link
As I was reading this article, about part way into the low platform power sections I suddenly had this thought: "Oh man, AMD is gonna die...!"
I don't know if that's true for the entire microprocessor side of AMD, since they look like they're already starting to transition out of the desktop space, but I don't know if they're going to stand much of a chance if they're planning on entering the same TDP range as Haswell.
Do you think there's a chance AMD will start focussing on designing ARM ISA cores? Or will expanding on their x86 Bobcat-type cores be enough for them?
sean.crees - Friday, October 5, 2012 - link
I also worry about AMD. AMD has been 1-2 steps behind Intel for a while now, and now it seems Intel is at least 1 or 2 steps behind ARM and the future. Is that going to mean AMD is just too far behind to stay relevant now? If nothing else, I suppose AMD can fall back on graphics cards with its ATI acquisition.
Da W - Friday, October 5, 2012 - link
If Haswell keeps x86 relevant in the tablet space and thus Windows 8 has the upper edge over Windows RT and Windows tablets can grab +-50% market share from the iPad, then it can be good for AMD, provided they survive that long.RedemptionAD - Friday, October 5, 2012 - link
If AMD can create a team to focus on increasing IPC with a goal to one-up Intel, and have the ATI graphics people keep doing what they do, with a time goal of say 2 years (Note: Portables/Notebooks/Desktops should all be x64 by then), then I think that AMD will be able to return to their Athlon 64 glory days or better.
Da W - Friday, October 5, 2012 - link
AMD spends 1/10th of what Intel does on R&D. There are things they just can't do; I suspect pursuing higher x86 single-thread performance is one of them.
StevoLincolnite - Saturday, October 6, 2012 - link
However, a lot of the R&D Intel spends is on lithography-type technologies; AMD doesn't have to spend billions on such things anymore.
Besides, a simple way for AMD to beat Intel when Intel is a node ahead is to throw more transistors at the problem, which they have succeeded very well at doing in the past.
Mind you, that comes at the cost of power and die size, however with stuff like clock mesh it can negate some of that.
Kevin G - Friday, October 5, 2012 - link
Being four steps behind ARM isn't necessarily a bad thing unless you're trying to leapfrog them. AMD appears to be content with letting Intel spearhead the effort to get into the ultramobile market. With Intel two steps behind ARM and unable to leapfrog them, there is little chance that AMD would be able to do the same. It isn't just knowing what battles to fight but also when to fight them.
abufrejoval - Friday, October 5, 2012 - link
It was only when I was reading Joanna Rutkowska's notes on the current UI limitations within Qubes that I finally understood (I believe!) the message which AMD has been pushing for quite a few years now: GPU compute will truly be an integral part of their future APUs in one or two generations, becoming almost an augmented instruction set instead of just a SoC.
Currently all Qubes "user" applications, that is everything except the Dom0, can't use the GPU to render their graphics: It's basically software rendering into an off-screen composition buffer and then GPU assisted composition of these software buffers onto the visible screen (this time with all the wobble and transition effects we've all come to expect and love ;-)
That's because although the GPU is on the same die even on the newest Trinity class APUs, it's still logically very separate, sharing only some stuff but bypassing, I believe, the ordinary page tables (not the IOMMU ones) and the snooping logic for caches. So even if GPU and CPU sit on the same die and use the same physical DRAM bus, doing GPU compute implies using a dedicated part of that RAM in a way which doesn't mesh seamlessly with CPU compute.
But the roadmap seems to imply that this limitation will go away, which would allow e.g. Qubes to use GPU assisted rendering anywhere in user space memory and thus also into a per DomU virtual framebuffer composed of quite ordinary paged virtual memory, which could then be assembled by the Dom0 for the visible screen or for video encoding and streaming to a remote display device, e.g. for cloud gaming.
This easy feeding of GPU "results" into another software layer is currently either impossible or requires major fiddling with device drivers so it's limited to the GPU vendors and bilateral deals such as nVidia and Splashtop. Once the GPU becomes more of an augmented instruction set, allowing OpenCL or even hardware primitives on ordinary user space paged virtual memory, this becomes as natural as running virtual machines with hardware virtualization.
And at that point even the new 256bit FMA may look pretty lame compared to what hundreds of APU EUs could do. That to me explains rather well why AMD isn't spending more transistors on a vastly improved CPU-only x86 ISA: It truly believes it's a dead end for both personal and scientific workloads.
It's a very daring bet and I very much admire them for having the vision and the balls to tie the company's survival to it. Over the last 40 years Intel seems to have failed with most of its visions (80432, i860, Itanium), but excelled at evolving x86. AMD, however, seems better on vision and noticeably 2nd rate on execution.
APUs are potentially quite dangerous both to nVidia and to Intel, because both can't easily duplicate them: The AMD/Intel cross licensing deal IMHO won't cover the GPU portion. Unless nVidia and Intel join, which would only happen if either of the two is in truly dire straits.
But quite a few things need to fall into place over the next couple of years and AMD needs to survive them for that potential to develop. And it looks like all the other players aren't standing still.
Events like Apple potentially using Samsung-augmented cash billions to turn TSMC into a private provider of 1x nm ARM SoCs are sending shock waves into the market, which may force "strange" alliances.
These days, when even trivial things like "swipe to unlock" can be patented and used to bloodlet competitors, I'm surprised to see IBM and Intel use things like transactional memory, which saw silicon first with Sun's Rock, I believe, or Intel turning to eDRAM for caches and frame buffers, which IBM implemented first on the p-Series.
That leads me to an open question on the commercial workloads, which is almost the only domain where I have difficulties seeing the immediate benefit of APUs, at least after Oracle's grab on Java and their expressed intent to make commercial workloads a SPARC exclusive (please see Larry's opening remarks at OpenWorld 2012): How can AMD make APUs the better Java and database engines? How can they make search, big data, map reduce or JavaScript run better on APUs?
I can only guess that having managed CPU+GPU AMD would be in a better position to add xPU for all of the above.
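For what it's worth, the shared-memory GPU compute described in the comment above can be sketched with OpenCL 2.0-style shared virtual memory. The snippet below is only an illustrative sketch under that assumption (a GPU and driver with coarse-grained SVM support); it is not from the article or the commenter, and error handling is omitted. The point is that the CPU and GPU touch the same allocation, with no explicit copies between "CPU memory" and "GPU memory".

/* Hedged sketch: zero-copy GPU compute on one shared allocation using
   OpenCL 2.0 coarse-grained shared virtual memory (SVM). Assumes a GPU
   with OpenCL 2.0 SVM support; error checking omitted for brevity. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *data, float factor) {"
    "    size_t i = get_global_id(0);"
    "    data[i] *= factor;"
    "}";

int main(void)
{
    enum { N = 1024 };
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    /* One allocation visible to both CPU and GPU; no explicit copies. */
    float *data = clSVMAlloc(ctx, CL_MEM_READ_WRITE, N * sizeof(float), 0);

    /* CPU writes directly (coarse-grained SVM needs map/unmap around host access). */
    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_WRITE, data, N * sizeof(float), 0, NULL, NULL);
    for (int i = 0; i < N; i++)
        data[i] = (float)i;
    clEnqueueSVMUnmap(q, data, 0, NULL, NULL);

    /* GPU operates on the very same pointer. */
    float factor = 2.0f;
    size_t global = N;
    clSetKernelArgSVMPointer(k, 0, data);
    clSetKernelArg(k, 1, sizeof(factor), &factor);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);

    /* CPU reads the result back from the same allocation. */
    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_READ, data, N * sizeof(float), 0, NULL, NULL);
    printf("data[10] = %f\n", data[10]); /* expect 20.0 */
    clEnqueueSVMUnmap(q, data, 0, NULL, NULL);
    clFinish(q);

    clSVMFree(ctx, data);
    return 0;
}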
ltcommanderdata - Friday, October 5, 2012 - link
A great, detailed description of Haswell's architecture. I do have some questions though.
You mentioned that Intel will be including up to 1 redundant EU in the GPU array. Does that mean only GT3 will have the 1 redundant EU (41 total, 40 usable) with GT2 having no redundancy? Or is it 1 redundant EU per sub-slice, so GT2 will have 1 and GT3 will have 2?
Will the embedded DRAM be implemented PoP like in a SoC? When you say we'll see a version of Haswell with embedded DRAM, will all GT3 parts have embedded DRAM or will only some GT3 parts have it (kind of a GT4)?
Given the long timescales of CPU design, there would be overlap between the Haifa team working on Sandy Bridge/Ivy Bridge (particularly Ivy Bridge) and the Hillsboro team working on Haswell. I was wondering if you knew how much opportunity there is for learning between consecutive designs, in terms of the magnitude of changes possible and the timescales before things are pretty much fixed? I'm in no position to judge, but I was also wondering, based on your knowledge of the architectures and/or interactions with members of the design teams, if you sense any distinct difference in design philosophies between the Haifa and Hillsboro teams. After all, the Haifa team's background was in power-efficient, mobile-oriented designs whereas Hillsboro was high-performance, desktop/server oriented. You mentioned in the article that Haswell goes back to Nehalem's 3 clock domains due to lessons learned from Sandy Bridge/Ivy Bridge. While I don't doubt that's the primary reason, I wonder if design philosophy played a role too, since Nehalem and Haswell are both Hillsboro designs and maybe they like 3 clock domains.
Anand Lal Shimpi - Friday, October 5, 2012 - link
Unfortunately that's all the info I have on redundancy in the GPU array. I think we'll have to wait until we're closer to launch to know more. The same goes for the nature of the on-package memory.
I wondered the same thing about the correlation between design teams and decisions in Nehalem/Haswell. I refrained from speculating on it in the article because I didn't necessarily see any reason for doing so, but I definitely noticed the same correlation. It could just be a coincidence though. Nothing else beyond the L3 cache frequency really stood out to me as being an obvious common thread between Nehalem and Haswell though.
Take care,
Anand
ltcommanderdata - Friday, October 5, 2012 - link
Thanks again for your insights.
tipoo - Friday, October 5, 2012 - link
Speaking of the EUs, is the GT3 part twice as fast as the HD4000 with or without the eDRAM cache? The article seems to imply with, but then what is the performance without it if they've doubled the EUs? Doesn't it seem more likely they doubled performance without the cache, and the cache doubles it beyond that?
telephone - Friday, October 5, 2012 - link
Anand, thanks for the insights. We all enjoyed it very much and look forward to getting the real thing into your labs.
To clarify some questions:
As for the design team philosophy, the Hillsboro design team continually tries to outdo the Haifa design team and vice versa. Both teams have access to the other teams' design collateral, as we co-own the tick-tock model.
Next, the reasons for the "3" clock domains are too complicated (and confidential) to go into. Since designing for "2" clock domains is much simpler, the reason is not that we enjoy pain and misery. Suffice it to say that you are missing a very big piece of the puzzle, and accurate conclusions as to why this was done cannot be drawn from the information you have. And the number of clock domains is in quotes because those are not accurate anyhow.
Sincerely,
Someone from the Hillsboro Design Team
Stahn Aileron - Friday, October 5, 2012 - link
I'm curious as to whether Intel has enough interest to drive the Atom design low enough to hit ARM power levels (like Medfield) and integrate an Atom core into a Core CPU design. nVidia introduced a heterogeneous CPU in their Tegra 3 SoC (two different ARM core types in the CPU block). From all the stuff I've seen about Intel over the past half decade, I'm pretty sure they have the resources to pull that off. They have top-notch designers and engineers with the basic tech and designs needed to start R&D on that, I think.
On the other hand, if they really are trying to force a Core design into Atom territory... Well, hell ya ^_~ Still, I can't really see Core hitting the sub-1W power levels they've been able to do with Atom (Medfield). I figure using an Atom core for basic S0ix functions would be a little more power efficient than using a Core design, but I'm no silicon engineer. Intel would know about that far better than me.
jigglywiggly - Friday, October 5, 2012 - link
Wish the onboard gpu was better =/
Woulda been nice for a laptop
tipoo - Friday, October 5, 2012 - link
2x the HD4000 is pretty decent for integrated. I wonder if that's 2x with or without the eDRAM cache though.
ElvenLemming - Friday, October 5, 2012 - link
It's been known for a while that Haswell was only going to have a moderate improvement in the iGPU and the next big overhaul would be coming with Broadwell.
csroc - Friday, October 5, 2012 - link
This is impressive; it might convince me it's time for a new laptop. On the other hand I also need to build a new desktop workstation, and Haswell so far hasn't impressed me in that space.
mayankleoboy1 - Friday, October 5, 2012 - link
Is Intel sacrificing desktop CPU performance to make an architecture that is geared to the mobile space?
csroc - Friday, October 5, 2012 - link
It feels that way to me. Mobile performance seems to be their big concern now, that and improving the GPU. Two things I generally can't be bothered to care about when I'm looking to build a new workstation. I suspect I'll build an Ivy Bridge system because I could use it now and see nothing worth getting excited about.
dishayu - Friday, October 5, 2012 - link
I fully share your sentiment. To be very crude, I don't mind at all paying for power improvements, because it will pay for itself in the long term (by consuming less power AND needing less cooling). But I DO mind very much paying for 40 EUs of GPU on my desktop build which I will not use even for a second. Me, you and many others do not care about on-die graphics, and Intel should realize that.
I don't know why intel can't offer us both GPU and GPU-less options, the way they did with motherboards back in the day? The P965 had no graphics, the G965 did. Pretty sure it's technologically not an issue.
DanNeely - Friday, October 5, 2012 - link
If it makes you feel any better, reports elsewhere are that GT3 will be mobile only, because desktops don't have the power/size constraints driving the need for premium IGPs.
Intel's non-IGP CPUs are the E series parts; unfortunately they've failed to execute on the enthusiast side in terms of price/launch date, leaving them as mostly server parts.
There just aren't enough of us to justify Intel adding another die design for their mass market socket that doesn't have an IGP at all instead of just letting us turn it off and use the extra TDP headroom for more time at boost speeds.
Omoronovo - Friday, October 5, 2012 - link
I'm somewhat in disagreement with you both.
Whilst I share a concern that Intel is no longer focusing on raw performance improvements in the purely desktop space, they are still delivering incremental updates to the architecture that will benefit all current software (even if only marginally). However, processor performance has been reaching more and more diminishing returns in recent years, namely that software is simply not able to take advantage of multiple cores and improved performance because of (primarily) locks and complexity in creating multi-threaded applications.
As such, Intel has been focusing on that area - to make it easier for software and software developers to take advantage of the performance that exists *now* rather than brute forcing the issue by simply delivering more raw performance (much of which will be wasted/remain idle due to current software constraints).
With this, Intel has been able to focus on keeping performance high whilst subsequently dropping power usage substantially - the fact the iGPU is oftentimes not being used in a desktop environment does not invalidate its utility - QuickSync is a prime example of where the gpu can accelerate certain types of processing, and if more software takes advantage of this we should see even more gains in future.
For the last 6 years or so, Intel has shown that it knows what demands will be placed on future computing hardware, and they seem convinced that this is the way to go. We might not be there yet, but technologies like C++AMP, OpenCL and such make me hopeful that this will change in a few years.
cmrx64 - Friday, October 5, 2012 - link
I solved this problem by buying an Ivy Bridge Xeon (specifically, an E3-1230v2). No GPU, lower power consumption than the equivalent i5/i7, has hyperthreading, performs really well, and is a lot cheaper than an i7.
If you don't care about the GPU, look to the Xeon line.
dishayu - Friday, October 5, 2012 - link
Woah! I did not even think of that. That is VERY compelling, but I can't do without an unlocked multiplier, so there is no perfect processor for me still :(
StevoLincolnite - Friday, October 5, 2012 - link
Or just go with a Socket 2011 Core i7 3930K like I have and do a little bit of undervolting; it has no IGP.
I think the reason why the desktop space has seen decreasing/stagnant sales is simply because a lot of people see no need to upgrade.
A Core 2 Quad Q6600 @ 3.6ghz, with a decent chunk of Ram and a decent graphics card is actually fairly capable of running almost every game at maximum settings.
Heck I know people who are perfectly happy sitting with a Pentium 4 for basic web use.
I think a change needs to happen where software catches up with hardware to give people a reason to upgrade and drive sales which might reinvigorate Intel and AMD to innovate.
Windows 8 and the next generation consoles might actually help in that regard.
De_Com - Friday, October 5, 2012 - link
Well said Steve. Couldn't agree with you more.
I'm running a Core 2 Extreme QX6850 at 3.4ghz, 1066Mhz DDR2 Ram and a GTX295 and it still rocks all the newest games at or close to max settings.
Will have had this system 4 years this November (all except the GTX295, which was upgraded from a 9800 GX2); even now I'm thinking that was a waste of cash.
I've gone to upgrade at least twice each year, but can't justify it.
The only place I'd see returns is in the power costs, but hey, whats a few extra cents.....
The system meets my needs, and forking out for a similar system today would cost around the €1800 mark.
Until the software can better utilize the components I'm holding out until Summer 2013, that'll be over 4 years I've gotten out of this system. Up until 2008 I slavishly upgraded every year or 2.
lukarak - Saturday, October 6, 2012 - link
This (late) December, I will have had my i7 for 4 years, and I have not seen a single reason to upgrade. The GPU is 2.5 years old (GTX480, was a 280 before that).
An X58 motherboard has 6 memory slots, and now houses 24 GB of ram for virtual machines, which can go to 48 GB for a reasonable price.
I just don't see the need to do anything more, and this will probably fail from old age before i would need a drastically faster machine.
xaml - Thursday, May 23, 2013 - link
"but hey, whats a few extra cents....."Sure, it's probably not your generation to take the hit, having to deal with the consequences of energy excesses.
DanNeely - Friday, October 5, 2012 - link
Is that actually an IGP-less chip, or just a standard LGA1155 quadcore chip with a disabled IGP?
csroc - Friday, October 5, 2012 - link
I don't mind power savings; the few times my system is idle it could certainly benefit, but overall it would mean reduced consumption even under load. My system just doesn't spend enough time in idle with my Q9450.
Ultimately it does seem as though the software demand for faster CPU hardware has slowed, and between that and the lack of real competition, so has the development.
If it weren't for the fact that I need more RAM or wanted faster photo processing (and may start doing some video) I'd probably keep what I've got a bit longer. My Q9450 hasn't held me back from playing any games yet. The 20% OC I've been running doesn't hurt but ultimately a lot of things just aren't CPU limited anymore.
Kidster3001 - Monday, October 15, 2012 - link
If you're playing 3D games then your CPU is likely "idle" 50%-75% of the time. Idle time does not just mean when the display is off.
IanCutress - Friday, October 5, 2012 - link
You may think this as a result of all the low power talk, but Haswell is doing something rather important on the peak performance side. The increase in the size of the execution engine is important - adding in another integer ALU and another load/store means that workloads that mix INT and FPU work (think loop counters which store an INT for loop iteration, then perform some FP calcs) will improve. Increasing the bandwidth available and being able to keep the two FPUs fed with info means greater throughput, as long as the bandwidth and thread switching can hide any additional L3 latency. Personally I'm thinking this may be a subtle move towards more threads per core in future architectures. Some of the non-x86 designs are abusing 8 threads/core with improvement gains, so I wonder if that would be possible here. Ideally we would like every port on the execution engine to do everything, with a single pipeline feeding it and excellent branch prediction to help with single thread speed. Smaller nodes help with that silicon real estate, or someone will stumble on a better/smaller way to actually physically create these things.
Ian
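As a rough illustration of the mixed INT/FP loops Ian mentions (a made-up example, not anything from the article), each iteration below keeps the integer side busy with the counter, the compare/branch and the address arithmetic while the FP units handle the multiply-add:

/* Hedged illustration only: the function and data names are invented.
   Integer work: increment, compare and array indexing per iteration.
   FP work: two loads and a multiply-add per iteration. */
#include <stddef.h>

float dot(const float *a, const float *b, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++) {  /* integer: i++, i < n, address math */
        sum += a[i] * b[i];           /* loads plus FP multiply-add */
    }
    return sum;
}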
DanNeely - Friday, October 5, 2012 - link
I'm curious what IBM/Oracle's high SMT designs look like on the execution port side. As long as it's business as usual, I doubt Intel will ever make all the ports do everything, because it would just be hogging a huge amount of die area when the odds of each thread doing all of the same instruction type constantly are very low. Smaller bursts of one type can be spread out using OOOE.
TeXWiller - Friday, October 5, 2012 - link
Perhaps they also try to reach lower usable clock frequencies through performance upgrades and this way gain some additional voltage scaling, or what is left of it.
vegemeister - Saturday, October 6, 2012 - link
>think loop counters which store an INT for loop iteration then perform some FP calcs
If updating the loop counter is taking a substantial fraction of the CPU time, doesn't that mean the compiler should have unrolled more?
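For illustration of the unrolling being asked about (again a made-up sketch, assuming n is a multiple of 4), four FP operations now share a single counter update and branch, so the integer bookkeeping per element shrinks:

/* Hedged sketch only; names are illustrative and n is assumed to be a
   multiple of 4. Separate accumulators also help hide FP add latency. */
#include <stddef.h>

float dot_unrolled4(const float *a, const float *b, size_t n)
{
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    for (size_t i = 0; i < n; i += 4) {   /* one increment/branch per 4 elements */
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    return (s0 + s1) + (s2 + s3);
}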
Anand Lal Shimpi - Friday, October 5, 2012 - link
The high end desktop space was abandoned quite a while ago. The LGA-2011/Extreme platform remains as a way to somewhat address the market, but I think in reality many of those users simply shifted their sights downward with regards to TDPs. A good friend of mine actually opted for an S-series Ivy Bridge part when building his gaming mini-ITX PC because he wanted a cooler running system in addition to great performance.
To specifically answer your question though - the common thread since Conroe/Merom was this belief that designing for power efficiency actually means designing for performance. All architectures since Merom have really been mobile focused, with versions built for the desktop. I like to think that desktop performance has continued to progress at a reasonable rate despite that, pretty much for the reason I just outlined.
Take care,
Anand
csroc - Friday, October 5, 2012 - link
Sandy Bridge E just seems to price itself out of being reasonable for a lot of people. The boards in particular are rather steep as well.
dishayu - Friday, October 5, 2012 - link
Well, LGA2011 is a bit of a halo product with no real substance. An Ivy Bridge 3770K will stand up to a quad core LGA2011 part nicely, not to mention it supports PCIe gen 3, so even though it has fewer lanes, it doesn't have a bandwidth disadvantage. Moreover, LGA2011 is still stuck on the Sandy Bridge architecture, so that again isn't quite on the bleeding edge, and as far as I understand, Haswell will come out before IB-E does, so it's 2 full cycles behind.
Kevin G - Friday, October 5, 2012 - link
For a single discrete GPU, Ivy Bridge would be able to match the bandwidth of Sandy Bridge-E: a single 16 lane PCI-E 3.0 connection. Things get interesting when you scale the number of GPU's. There is a small but clear advantage to Sandy Bridge-E in a four GPU configuration. Ivy Bridge having fewer lanes does make a difference in such high end scenarios.
For its target market (mobile, low end desktop), Ivy Bridge is 'good enough'.
vegemeister - Saturday, October 6, 2012 - link
Quad core LGA2011 is kind of a waste though. If you're already paying extra for the socket, my philosophy is go hexcore and 8 DIMMs or go home.
Peanutsrevenge - Friday, October 5, 2012 - link
Given that desktop software's not really been pushing for better CPU performance, the direction intel has taken is not a bad one IMO either.
It's now possible to build a mighty gaming rig in an mITX case (Bit Fenix Prodigy), think 3770K and GTX 690 gfx and watercooled.
A rig like that will likely last 3 years before settings have to be tweaked to keep 60+ fps.
What's really needed is for software to take advantage of GPUs more (which would play into AMD's hands), but I fear many of the best coders have switched from Windows to Android/iOS development. With Windows 8 shipping shortly, that number will increase further.
j_newbie - Saturday, October 6, 2012 - link
I think that is quite sad.
I for one always need more FLOPS. MCAD work and simulation work depend on two things: memory bandwidth+size and flops. Surprisingly, AMD still offers a better vfm deal in this space, thanks to avx instructions not being widely adopted into most FEA/CFD code yet and the additional ram slots you get with cheaper boards.
Server components are always overpriced, as we don't need a system to last very long.
my 3930k setup is about 1.5 times faster than the x6 setup at 3 times the cost... :(
Peanutsrevenge - Saturday, October 6, 2012 - link
You're talking more of a workstation than a desktop. Hence my use of the word 'desktop'.
tim851 - Friday, October 5, 2012 - link
This is a perfect demonstration of the power of competition.
With AMD struggling badly, Intel was content to push Atom. They didn't want to innovate in that sector; they sold 10 year old technology with horribly outdated chipsets. Yes, they were relatively cheap, but I was appalled.
Step in ARM, suddenly becoming a viable competitor. Now Intel moves its fat ass and tries to actually build something worthwhile.
Sadly, free markets are an illusion. Intel should pay dearly for the Atom fiasco, but they won't. Just as they didn't pay for the Pentium 4 debacle. They will come 5 years late to the party, but with all their might, they will crush ARM. ARM will fall behind; they can't keep up with that vicious tick-tock cycle. Who can?
In 8 years, ARM will have been bought by some company, perhaps Apple. ARM will then no longer be a competitor, it will be just a different architecture, like X86. I don't see Apple having any long-term interest in designing their own hardware, it's way too unsexy. They will just cross-licence ARM with Intel and in 10 years time, Intel will rule supremely again.
UpSpin - Friday, October 5, 2012 - link
You forget that Intel vs. ARM is something bigger than AMD vs. Intel.
Behind ARM stand Qualcomm, Samsung, Apple, ...
All new software is written for ARM, not Intel (x86) any longer. Microsoft releases a rewritten ARM Windows RT with a rewritten Office for ARM. Android runs on ARM and everyone supports the ARM version, while only Intel has to keep it compatible with x86.
Haswell will get released, when exactly? In a year; ARM A15 in maybe two months. Haswell has nice power savings, but it's still an Ultrabook design. The current Atom SoCs are much worse than current A9/Krait SoCs. Intel heavily optimized the software to make it look not that bad (excellent Sunspider results), but they are.
If Windows 8 is a success, Intel will be lucky. If it's not, which many expect, Intel has a real problem.
Intel is a single company building and developing their CPU/SoC. ARM SoCs get built and developed by a multitude of companies.
If Apple can design their own ARM based SoC which has the same performance as a Haswell CPU (which is easy in the GPU area: the iPad most probably already has a faster GPU than the Intel CPUs, and with A15 and Apple's A6 it's possible to get as fast with the CPU, too), they will be able to move Mac OS to ARM. This allows them to build a very very power efficient, lightweight, silent MacBook. They can port apps from iOS to MacOS and vice versa. Because they designed their SoC in-house, they don't have to fear competition in the near term.
Apple always wants a monopoly, so it doesn't make sense for them to cross-license anything.
tuxRoller - Friday, October 5, 2012 - link
Unless your app is doing some serious math you can get by with just using a cross platform key chain.Frankly, the hard part is targeting the different apis that are, currently, predominating on each arch. However, assuming those don't change , and the form factor doesn't either, your new app should just be a compile away.
Kidster3001 - Monday, October 15, 2012 - link
Current ATOM SoCs are not "much worse" than A9/Krait. Most CPU benchmarks running in native code will favor the Intel SoC. It's the addition of Android/Dalvik that leans the favor back to ARM. Android has been on ARM for a lot longer and is more optimized for ARM code. Android needs to be tweaked more yet to run optimally on x86.
Kidster3001 - Monday, October 15, 2012 - link
" with A15 and Apples A6 it's possible to get as fast with the CPU, too"say what? A15 and A6 are a full order of magnitude slower than Haswell. omg
Dalamar6 - Sunday, May 12, 2013 - link
Nearly all of the software on Android is junk.
Apple blocks everything at a whim and gives no control.
I don't know about Windows RT, but I suspect it will suffer the same manner of crap programs Android does if it's not already.
Even if people are more focused on developing for ARM, the ARM OSes are still way behind in program availability(especially quality). And it's downright sad seeing people charging money for simple, poorly coded programs that can't even compare to existing open source x86 software.
jacobdrj - Friday, October 5, 2012 - link
I agree competition is good/great. However, how you categorize Atom is just not true! Atom filled a very real niche: cheap mobile computing. Not powerful, but x86 and fast enough to do basic tasks. I loved my Atom netbook and used it until it bit the dust last week. Would I have liked more power? Sure, but not at the expense of (at the time) battery life. Besides, once I maxed it out by putting in an SSD and 2 GB RAM, my netbook often outpaced many people's newer, more powerful Core based laptops for basic tasks like word processing and web browsing.
Just because power users were unhappy does not mean Atom was a 'fiasco'. Those old chipsets allowed Atom netbooks to regularly sell, fully functional, for under $200, a price point that tablets of similar capability are only just starting to hit almost 4 years later...
Don't bash Atom just because you don't fit into its niche, and don't blame Intel for HP trying to oversell Atom to the wrong customers...
Peanutsrevenge - Friday, October 5, 2012 - link
If competition is 'good/great' what does that make cooperation?
Imagine the possibility of Intel and AMD working together along with Qualcomm, Imagination etc.....
Zeitgeist Movement.
Kidster3001 - Monday, October 15, 2012 - link
Intel is not going this way because "ARM stepped in". Intel is going this way because it decided to go play in ARM's playground.
krumme - Friday, October 5, 2012 - link
My Samsung Series 9 x3c (Ivy Bridge) draws, looking at this page with wifi and BT on, anywhere from 4.9W to 9.9W from lowest to highest screen brightness, with a normal usage of 7.2W at good brightness (using Samsung's own measuring tool).
So the screen is by far the most important component on a modern machine. In the complete ecosystem I wonder if it matters how efficient Haswell is. The benefit of a 10W TDP for, say, the same performance is nice, but does it really matter for the market effect? And the idle power is already plenty low.
I doubt Haswell will have a significant impact - as nice as it is. This is just too late and way too expensive for the mass market. Those days are over.
By the time it hits the market, dirt cheap TSMC 28nm A15s and Bobcat successors will hit the market for next to nothing, and will give 99% of consumers the same benefits.
kukreknecmi - Friday, October 5, 2012 - link
I hope I have this right. The L3 on SB/IB isn't used by the GPU. L3 still serves as a system cache via the memory controller. If the GPU needs to access memory, it sends a request to the memory controller; L3 is not directly accessible to the GPU as a texture cache etc.
On IB, they added a 512k cache which is split in half: 256k of it is used by the texture system for backfeeding and the other 256k half is used for other things.
The article implies that the L3 cache on IB is used as a texture buffer like on ordinary graphics cards. Only on Haswell will the L3 cache be accessible and usable as some kind of GPU-specific buffer.
Kevin G - Friday, October 5, 2012 - link
The confusing thing is that consumer Ivy Bridge parts have an L3 cache just for the GPU which is separate from the L3 cache that the CPU uses. The Ivy Bridge GPU's can use the CPU's L3 cache as the GPU's L4 cache to a degree.
To confuse things further, the CPU side really has four levels of cache too. There is the small 1.5 KB micro-uop cache for instructions which comes before the 32 KB L1 instruction cache.
mayankleoboy1 - Friday, October 5, 2012 - link
From the article, it's not very clear: which platform (DT, Mobile, ultra mobile) will have the integrated voltage regulators/controllers?
Ryan Smith - Friday, October 5, 2012 - link
Ultra Mobile.
Anand Lal Shimpi - Friday, October 5, 2012 - link
It's not clear how much of the VR circuitry gets integrated into Haswell or necessarily which parts will have it and which ones won't. Ultra mobile is a shoo-in, but I've even heard of desktop parts getting it as well. We'll have to wait and see.
DanNeely - Friday, October 5, 2012 - link
Rats. Reading the article I was hoping that Intel had decided to only bake the VRMs into their ultra-mobile parts. Better VRMs are an important factor in high end OCing; with desktop boards not cramped for space I really hope Intel keeps them off the package.
Peanutsrevenge - Friday, October 5, 2012 - link
Seconded.
However, I wonder whether the VRMs on high end mobos will still be an option, where the on package VRMs will simply extend the capabilities?
But given Intel's recent distaste for overclocking, it wouldn't surprise me if we soon see CPUs completely locked from overclocking, or overclocking allowed only on E series, high profit chips.
Homeles - Saturday, October 6, 2012 - link
"However, I wonder whether the VRMs on high end mobos will still be an option, where the on package VRMs will simply extend the capabilities?"Bingo.
Homeles - Saturday, October 6, 2012 - link
Low end motherboards won't need them. High end overclocking boards will have them in addition to the ones on package.
tuxRoller - Friday, October 5, 2012 - link
Using LVDS reclocking you can reduce idle screen-induced wakeups to 30 (ditto for the memory controller if the cpu supports self refresh for the sram). eDP may allow even less.
dishayu - Friday, October 5, 2012 - link
I derived immense pleasure reading the article. Thank you, Anand. Big ups for the comprehensive read.
My thoughts:
I think Intel really dropped the ball by not having unlinked clocks for each core, like Qualcomm has for its S4 Pro processors. There are so many times that, for instance, I have a page open with some animated GIFs. They are strictly single thread processes and they won't let the processor go to an idle state. And this is a very VERY common occurrence that can, IMO, only be solved by adopting unlinked clocks for each core. 3 cores can stay in a sleep state (almost perpetually) and the processor runs on a single core with lowered frequency. THAT would be power efficient.
dagamer34 - Friday, October 5, 2012 - link
Uhh... isn't turning off unused cores and overclocking the 4th core within TDP to perform single threaded tasks exactly what the Turbo Boost introduced in Sandy Bridge does?
know of fence - Friday, October 5, 2012 - link
Reducing power is great and also inevitable, but Intel's move to compete against everything and everybody is alarming. With everyone trying to follow/please Apple, that means nothing good for the consumer: throw-away luxury electronics for exceptionally well groomed masses.
Also, isn't it too early to be hyping this stuff?
A5 - Friday, October 5, 2012 - link
Intel has to compete against ARM to keep them from taking over the "good-enough" computing space.
As for the rest of it, you're not making any sense.
jjj - Friday, October 5, 2012 - link
The ARM problem is not about the product but about price; long term, the CPU/SoC ASP will drop hard, there is competition now. Servers will keep them on life support for a while, but without fundamental changes to their business model they can't make it.
Intel should remember how they won the market.
dishayu - Friday, October 5, 2012 - link
It's about both. Intel does not have sufficiently low power parts at all, regardless of the price point.
mrdude - Friday, October 5, 2012 - link
Regardless of whether they step foot into that end of the spectrum or not (and by Anand's analysis that's more likely with Broadwell and on?), they still need to compete on price.
It's one thing to make a chip; it's quite another to make it competitive with respect to pricing. What works against a distant AMD won't work against ARM.
DesDizzy - Sunday, October 7, 2012 - link
I agree. This seems to be something that most people overlook when addressing the Wintel monopoly. The costs of Wintel products are high within the PC/Laptop space. The prices of ARM/Apps are cheap within the Smartphone/Tab space. How do Wintel square this circle without damaging their business model?
Krysto - Friday, October 5, 2012 - link
You may not agree with Charlie, Anand, but reality seems to agree with him:
http://www.techradar.com/news/computing/apple/appl...
I really don't know how you can think Apple would ever start using Intel chips in their iPads when Apple has already proven they want to make their own chips with A6.
Also, according to Charlie, Haswell will be like 40% more expensive than IVB. Atom tablets already seem to start at like $800. So I wish Intel good luck with that. Ultrabooks and Win8 hybrids won't drop down in price any time soon.
http://semiaccurate.com/2012/10/03/oems-call-intel...
Penti - Friday, October 5, 2012 - link
I don't know how you could fail so much in reading comprehension; Anand only said the same flying spaghetti monster-damn form factor. Nothing else. There also must be an ecosystem, but if you can run the same app on a tablet as well as a desktop on x86 with more performance than ARM, why wouldn't you see vendors use it? It is a full system, even capable of building itself. It's not about killing ARM. Intel still uses it; they need fairly high-performance RISC chips for stuff like baseband. They had a large market in smart-phones before 2006 and they made the choice to sell it because they had Atom in their lineup. They didn't forget about it.
It's Microsoft tablets that cost 500-900 dollars even on Atom, but they only need to compete with Windows RT, which is totally retarded as far as corporate customers go and not the same system as 8 Pro; it doesn't run the same software. An Android tablet could use a Z2460 (and the coming Z2580, after that Valleyview SoCs) and build a 240 dollar tablet. There is no price difference to be had as far as hardware is concerned. Windows 8 tablets are a whole other form factor and device to begin with. Most will have a keyboard and multitouch trackpad.
He only talks about the same form factor, size and battery life here. In the Microsoft ecosystem there is really no reason to go to Windows RT powered ARM devices, which don't have better performance and run no third-party desktop (Win32/full Windows SDK) software. Windows RT also lacks the same features in other areas, which makes those machines devices instead of general computing platforms. Remember they offer both here. Hell, the built-in email is even worse than the one built into Android since version 3.0 or so; it's a lot worse than third-party mail clients on Android, and worse than the mail clients in BlackBerry 10, Symbian, iOS and so on. If you're replacing a desktop you're not going with ARM here, not on a Windows device at least. Anand only talks about a new breed of DTR tablets and ultra-portables that will fit in the same form factor and battery life scenarios as ARM tablets. Apple certainly doesn't need to participate here.
Intel certainly has sales to be made if they move Haswell down to low-power Atom territory when it comes out later next year. They could be used as the only computing device you have (smartphone + hybrid tablet-PC), replacing desktops, ARM/Atom tablets, media PCs for your TV (just stream with Miracast), et cetera. ARM devices would just be cheaper, less capable devices there. But it's still different targets. Haswell still targets servers (the enterprise market), desktops and notebooks with larger form factor/power usage, as well as more portable stuff. Atom is still for the handheld stuff you use with one hand. ARM has moved quite fast, but they have no reason to target high-performance applications or build 100W SoCs that are fast without parallel computing. Applications like high-performance routers, for example, still use licensed and custom MIPS and PowerPC chips. There are plenty of markets where a full-feature ARM Cortex or x86 won't work either. ARM is just moving into the multimedia field, replacing custom architectures in TVs, displacing MIPS, PPC etc. If Apple builds a very large custom CPU architecture compatible with the ARM ISA for workstations, notebooks etc., they will just be in the same position they were with PowerPC and have to compete with the high-performance chips that most can't compete with, even with much larger resources than Apple. Apple and Samsung have no reason to do so outside handheld devices, low-power servers, consumer-oriented routers and streaming media boxes, which leaves plenty of room for Intel and all the rest. Plus WiFi and wireless baseband is a huge market in and of itself, and it doesn't matter what the application processor architecture is. Stuff like ARM has competed because you could replace previous products with it easily, thus taking some of the SoC market away from others, but that coincides with the choice to do so.
Anand Lal Shimpi - Friday, October 5, 2012 - link
It's the other way around: not talking about Apple using Intel in iPads, but rather Apple ditching Intel in the MacBook Air. I do agree with Charlie in that there's a lot of pressure within Apple to move more designs away from Intel and to something home grown. I suspect what we'll see is the introduction of new ARM based form factors that might slowly shift revenue away from the traditional Macs rather than something as simple as dropping an Ax SoC in a MacBook Air.
Take care,
Anand
A5 - Friday, October 5, 2012 - link
Yeah. I knew what you were getting at, but I guess it wasn't that obvious for some people :-p. Something like an iPad 3 with an Apple-made keyboard case + some changes in iOS would make Intel and notebook OEMs really scared.
tipoo - Friday, October 5, 2012 - link
So pretty much the Surface tablet. The keyboard case looks amazing, can't wait to try one.
Kevin G - Friday, October 5, 2012 - link
Apple is in the unique position that they could go either way with either platform. They are capable of moving iOS to x86 or OS X to ARM on seemingly a whim. Their decision would be dictated not by current chips or those arriving in the short term (Haswell and the Cortex A15) but rather by long-term road maps. Apple would be willing to ditch their own CPU design if it brought a clear power, performance and process advantage over what they could do themselves. The reason why Apple manufactured an ARM chip themselves is that they couldn't get the power and performance out of SoCs from other companies. The message Intel wants to send to Apple is that Haswell (and then Broadwell) can compete in the ultra mobile market. Intel also knows the risk to them if Apple sticks to ARM: Apple is the dominant player in the tablet market, one of the major players in the cell phone market and pretty much the only success in the ultrabook segment. Apple's success is eating away at the PC market, which is Intel's bread and butter in x86 chip sales. So for the moment Intel is actively promoting Apple's competitors in the ultrabook segment and assisting in 10W Ivy Bridge and 10W Haswell tablet designs.
If Intel can't get anyone to beat Apple, they might as well join them over the long run. This would also explain Intel toying with the idea of becoming a foundry. If Intel doesn't get their x86 chips into the iPad/iPhone, they might as well manufacture the ARM chips that do. Apple is also one of the few companies who would be willing to pay a premium for Intel foundry access (and the extra premium for building ARM rather than x86).
So there are four scenarios that could play out in the long term: the status quo of x86 for OS X + ARM for iOS, x86 for both OS X + iOS, ARM for both OS X + iOS and ARM built by Intel for OS X + iOS.
Peanutsrevenge - Friday, October 5, 2012 - link
I will LMAO if Apple switch Macs back to RISC in the next few years. Will be RISC, x86, RISC in the space of a decade.
Poor Crapple users having to keep swapping their software.
I laughed 6 years ago, and I'll laugh again :D
Kevin G - Saturday, October 6, 2012 - link
But it wouldn't be the same RISC. ARM isn't PowerPC. And hey, Apple did go from CISC to RISC and back to CISC again for their Macs.
Penti - Saturday, October 6, 2012 - link
They hardly would want to be in the situation where they have to compete with Intel and Intel's performance again. Also, their PC/Mac lineup is just so much smaller than the mobile market they have; why would they create teams of thousands of engineers (which they don't have) to create workstation processors for their mobile workstations and Mac Pros? They couldn't really do that with the PowerPC design despite having influence on the chip architecture; they lost out in that race and just grew more dependent on other external suppliers, and those Macs would lose the ability to run Boot Camp'd or virtualized Windows. It's not the same x86 as it was in 2006 either. A switch would turn Macs into toys rather than creative and engineering tools. It would create a disadvantage with all the tools developed for x86, and if they drop the high end they might as well turn themselves into a mobile computing company and port their development tools to Windows. It's not like they will replace all the client and server systems in the world, or even aspire to.
I don't have anything against ARM creeping into desktops. But they really have no reason to segment their systems into ARM or x86. It's much easier to keep the iOS vs OS X divide.
Haswell will give you ARM or Atom (Z2760) battery life for just some hundred dollars more or so. If they can support the software better, those machines will be loaded with software worth thousands of dollars per machine/user anyway, where the weaker machines simply can't run most of that. Casual users can still go with Atom if they want something weaker/cheaper or another ecosystem altogether.
Kevin G - Monday, October 8, 2012 - link
The market is less about performance now, as even taking a few steps backward a user has 'good enough' performance. It is about gaining mobility, which is driven by reduction in power consumption. Would Apple want to compete with Intel's Xeon lineup? No, and Apple isn't even trying to stay on the cutting edge there (their Mac Pros are essentially a 3 year old design with moderate processor speed bumps in 2010 and 2012). If Apple was serious about performance here, they'd have a dual LGA 2011 Xeon as their flagship system. The creative and engineering types have been eager for such a system, and Apple has effectively told them to look elsewhere for such a workstation.
With regards to virtualization, yes, it would be a step backward not to be able to run x86 based VMs, but ARM has defined their own VM extensions. So while OS X would lose the ability to host x86 based Windows VMs, their ARM hardware could natively run OS X with an iOS guest, an Android guest or a Windows RT guest. There is also brute force emulation to get the job done if need be.
Moving to pure ARM for iOS and OS X is a valid path for Apple, though it is not their only long term option.
Penti - Tuesday, October 9, 2012 - link
You will not be able to license Windows RT at all as an end-user. Apple has no interest whatsoever in supporting GNU/Linux based ARM VMs. I'm sure they will update the Mac Pro; the reason it has languished is largely thanks to Intel themselves. That's not their only workstation though, and yes, performance is important in the mobile (notebook) space; performance per watt is really important too. If they want mobile workstations and engineering type machines they won't go with ARM, as it does mean they would have to compete with Intel. They could buy a firm with an x86 license and outdo Intel if they were really capable of that. ISA doesn't really matter here except when it comes to tools.
baba264 - Friday, October 5, 2012 - link
"Within 8 years many expect all mainstream computing to move to smartphones, or whatever other ultra portable form factor computing device we're carrying around at that point."I don't know if I am in a minority or what, but I really don't see myself giving up my desktop anytime soon. I love my mechanical keyboard my large screen and my computing power. So I have to wander if I'm just an edge case or if analyst are reading too much in the rise of the smartphone.
Great article otherwise :).
A5 - Friday, October 5, 2012 - link
8 years is a loooooong time in this space, and yes, you (and most people here) are in the minority. Notebooks have been outselling desktops for several years, and in 2011 smartphone shipments were higher than all PC form-factors combined. It's pretty clear where the big bucks are going, and it isn't desktop PCs.
flamethrower - Friday, October 5, 2012 - link
In 8 years you'll have 50-inch OLED TVs on your walls. What's going to drive them? Possibly a computer integrated into them.
Peanutsrevenge - Friday, October 5, 2012 - link
We'll just be using large screens, keyboards and mice wirelessly connected to our ultra portable devices. The desktop will likely still exist for people like us who frequent this site; however, its role will be far more specialised, possibly more as our personal cloud servers than our PCs.
yankeeDDL - Friday, October 5, 2012 - link
Wow. Thanks for the excellent article: I really enjoyed it. The thought of having a processor of the power level of Ivy Bridge in my mobile phone blows my mind.
Honestly though, I really can't see how the volume of CPUs for desktop PCs and servers is going to drop so dramatically, that Intel will need the volume generated by mobile, to "survive".
Yes, of course more volume will help, but 8 years from now, even if the mobiles will have such kind of computational power, I would imagine that a Desktop would have 10~20x that performance, as it is today.
It's true that today's CPUs are typically more powerful than the average user ever needs, but raise the hand who wouldn't trade his CPU for one 10x faster (in the same power envelope) ...
That said, 10W still seems like a lot to fit in a mobile: who knows the power consumption of high-end mobile CPUs today? (quad-core Krait CPU, for example, or even Tegra3)
dagamer34 - Friday, October 5, 2012 - link
Intel's real problem is that the power needed for "good enough" computing in a typical desktop CPU arrived a couple of years ago and is rapidly approaching in mobile. With more and more tasks being offloaded to the cloud, battery life is becoming a stronger and stronger focus. What's sad is that because AMD isn't the major player it once was, Intel has taken its eye off the ball, revving Atom with only minor tweaks and taking a laissez-faire approach to GPU performance. It's only recently that mobile has started to dominate in the minds of consumers, and Intel's lack of any major design wins (the RAZR i doesn't count) has forced Intel to push as hard as it is now.
sp3x0ps - Friday, October 5, 2012 - link
Where is the iPhone 5 review? I need details!! arghh.
Demon-Xanth - Friday, October 5, 2012 - link
Atom was targeted at UMPCs, but quickly took over low power embedded systems that don't need much performance but do run Windows.
tipoo - Friday, October 5, 2012 - link
Poor Via.
dgingeri - Friday, October 5, 2012 - link
"Within 8 years many expect all mainstream computing to move to smartphones, or whatever other ultra portable form factor computing device we're carrying around at that point."They said the same thing about laptops. Sure, laptops hold about 60-65% of the market these days, but the desktop is still very much around, and is the preferred platform for PC gamers and HTPC applications. They're far more flexible than any mobile form factor.
Smartphones also have the severe disadvantage of a very small screen. Even the largest are too small for most people to deal with. On top of that, actually surfing the net on those tiny screens is an exercise in frustration for many people. I try to tap on a link, only to get the link next to it, or above it, or below it, or possibly having my stupid phone just select the text instead of following the link.
Smartphones have their niche. There's no doubt there, but they are not going to be anyone's mainstream device unless they have needle thin fingers and 20/10 vision.
Anand Lal Shimpi - Friday, October 5, 2012 - link
I agree with the notebook/desktop comparison - these form factors won't go away. I should have said the majority of mainstream client computing goes to smartphones. And solving the display and input problems is easy: wireless display (WiDi/Miracast) and wireless keyboard/mouse (or a dock that does both over wires if you'd rather that).
Take care,
Anand
FunBunny2 - Friday, October 5, 2012 - link
While not a hardware issue (and thus not a major AnandTech venue), I would be amused if one of your writers explored the implications of small-real-estate mobile for data storage design (normal form databases vs. traditional files). My take is that small, consistent bites of bytes are required, and will eventually change how data is stored on the servers. Any takers?
lukarak - Saturday, October 6, 2012 - link
In other words, "....all cars were trucks....."?BoloMKXXVIII - Friday, October 5, 2012 - link
Very well written article. Other sites should read Anandtech to see how it should be done. Thank you.
All this power saving in idle conditions is great (love the looping of the frame buffer idea), but users aren't always reading text on their screens. When these chips are under load they are still going to draw very significant amounts of power. Unless battery technology improves by an order of magnitude I don't see Haswell (or its replacements) fitting into ultraportable devices like phones or "phablets". The other comments concerning AMD are on the mark. AMD is in big trouble. They are too far behind Intel right now and every indication is they will be falling further behind.
silverblue - Friday, October 5, 2012 - link
Steamroller will haul AMD back towards Intel. Not completely, but a lot closer than they have been, and potentially even ahead in some cases. Still, that process deficit has to be painful, as AMD can only win on idle power. I really hope GF doesn't mess up again, as delays really are costing AMD dearly. Steamroller is a good design, the sort that means AMD can have a cheaper but still decent part, but I fear it'll come too late.
Intel CPUs are looking even more tasty than ever.
overseer - Friday, October 5, 2012 - link
Great article. Then I sincerely hope AMD can still survive and stride forward in this mobile tide. (R.R. and J.K., you reading this?)
It may look silly but I do like underdogs and their (solid) products, especially when they achieve something with less talent, capital and execution.
wumpus - Friday, October 5, 2012 - link
"To put it in perspective, you'll be able to get something faster than an Ivy Bridge Ultrabook or MacBook Air, in something the size of your smartphone, in fewer than 8 years". I can tell you right now, while this architecture is absolutely great on a motherboard, this isn't the right path to the mobile space."Haswell is the first step of a long term solution to the ARM problem." Unfortunately, anandtech is one of the few places left that can call intel on this marketing blather. Intel's ARM problem is that there is no more efficient way to execute instructions than on a in-order, single instruction issue, clean RISC design: all of which are standard features on an ARM. ARM's intel problem is that this limits you to about .5GIPS ([G]meanless indicator of processor speed) compared to over 6GIPS on an all out Intel design.
The choice isn't all or nothing, just that this time Intel chose performance over efficiency. MIPS, Alpha and (to a large part) PowerPC all fell to high performance Intel chips that were vastly less complex than current designs. ARM could try to compete with Intel on performance, but if they are lucky they will end up like AMD, and if they can't out-design Intel (remember Intel's process advantage) they will end up like MIPS, etc.
The reason this all appears to be built around speed (and not efficiency) can be found on pages 7 and 8 (despite protests listed on those pages). Intel needs to add wider execution paths to try to get a tiny few more instructions out per second, all the while holding even more (than ivy or sandy) instructions in flight in case it can execute one. All this means a much longer path for any instruction and many more things computed, more leaky transistors leaking picoamps, more latches burning nanowatts. All ARM has to do is execute one after another.
I am surprised that they bothered to toot their horn about the GPU. It might beat ARM, but any code that can be made to fit a GPU should be run on an AMD machine (or possibly discrete nVidia board). They have been pushing Intel graphics for at least 15 years, don't pretend they are ever going to get it right.
In conclusion, I want one of these in my desktop. A phone CPU should look much more like an early core (maybe core2) design, maybe even more like a pentium pro.
A5 - Friday, October 5, 2012 - link
If we're going to start a RISC/CISC battle, you should really look at a modern ARM architecture before talking. What you can fit in a phone today isn't going to be what you can fit in a phone 8 years from now (in terms of both TDP and die size).
Getting Haswell-class performance from a 2020 smartphone isn't that far-fetched...you can argue that modern smartphone SoCs are close to the performance of the Athlon 64 2800+ or the Prescott Pentium 4s of 2004 in a lot of tasks.
wumpus - Friday, October 5, 2012 - link
There is a reason Atom is getting creamed in the phone space by ARM. Also, the only way TDP is going to change is with major increases in battery technology. X joules (typically converted to W·h in battery speak, but why not stick with SI units) means X seconds at 1 W or X/n seconds at n watts. On the high end, everything that won the war for CISC (namely, Intel's manufacturing skills) is even more true than when they won. There isn't going to be another. That doesn't mean that a chip designed for all-out performance is going to have any business competing with ARM on MIPS/W. If they wanted to compete on battery life, they would have scaled down the depth and breadth of the queue, not increased it.
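To make the joules point concrete, here's a minimal sketch of that arithmetic; the 42 Wh pack size is an assumed, illustrative figure, not the spec of any particular device:

```python
# Battery energy fixes runtime once average power draw is known: E = P * t.
def runtime_hours(battery_wh: float, avg_power_w: float) -> float:
    battery_joules = battery_wh * 3600.0        # 1 Wh = 3600 J
    seconds = battery_joules / avg_power_w      # X joules at n watts lasts X/n seconds
    return seconds / 3600.0

if __name__ == "__main__":
    # Assumed ~42 Wh ultrabook-class pack, purely for illustration.
    for watts in (1, 8, 30):
        print(f"{watts:>2} W average draw -> {runtime_hours(42, watts):.1f} hours")
```

Nothing about the CPU architecture changes that equation; only the average draw does.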
Actually, I was ready to go into full rant when I saw the opening. Then I checked that "ultrabook" meant 1.8GHz i3s. It is quite possible (although I still doubt it is a good way to use a battery) to build a chip that will do that and have low power. I just don't think that Haswell is in any way designed to be that chip.
FunBunny2 - Friday, October 5, 2012 - link
-- everything that won the war for CISC (namely, Intel's manufacturing skills) is even more true than when they won
It's been true since the P4 that the "real" CPU is a RISC engine fronted by an x86 ISA translator. Intel tried to sell an ISA-level RISC chip (twice). Not so hot. But Intel does know RISC. I've always wondered why they used all that transistor budget the way they did, rather than doing the entire instruction set in hardware, as they could have. It's as if IBM turned all the 370s into 360/30s.
Penti - Saturday, October 6, 2012 - link
It was the Pentium Pro that switched to a modern out-of-order, micro-ops powered CPU, i.e. P6. It's only the front end that speaks x86. Intel's own RISC designs like the i960 ultimately failed, and EPIC even more so when it failed to outdo AMD and Intel server processors in enterprise applications. In reality, customers only switched to Itanium because they had already made up their minds before there even was any product, thus killing the at-the-time more appropriate Alpha, MIPS and PA-RISC processors. But as soon as those were phased out, Intel's x86 compatible chips had already gained the enterprise features that they previously missed and that set those older chips apart. The front end and x86 decode doesn't use that much space in modern processors at all. CPU architecture isn't really all that important; today it's largely about the features it supports, the GPU, video decode/processing etc. ARM just made it into the out-of-order superscalar era in 2011 with the A9, and superscalar in-order in 2008 with the Cortex A8. Atom is kinda designed like a P5 CPU, i.e. superscalar in-order, and moves to an out-of-order design next year. Intel's first superscalar design was in 1988.
ARM just needs to be fast enough; it was fairly easy to replace the SH3, Motorola DragonBall and i386 designs in the mobile space, and it was even Intel that did it to a large part. And earlier 8086-stuff had already been left behind by that time. Now what's impressive is the integration and finish of the ARM SoCs. It was Intel that didn't want companies like Research In Motion to continue using low-power Intel x86 chips in their handheld devices. That only changed when Intel sold off the StrongARM/XScale line in 2006. Intel has no reason to start creating custom ARM ISA chips again, as they can compete with x86 chips, which they spend much more time adapting development tools and frameworks for anyway. Atom as a whole has a much larger market than XScale had on its own. Remember that Intel dropped stuff like RAID/storage processors too. Having Intel as a Marvell in ARM chips today wouldn't have changed anything radically.
Penti - Saturday, October 6, 2012 - link
Also, FPU/SIMD has been a large part of later ARM designs and implementations. It's really a big deal, as we saw with the chips lacking some of those parts. You shouldn't forget how important those bits are. Others have failed because they didn't take it seriously. That was 15-20 years ago even. Doesn't mean they are yet fighting x86-64 chips in high-end servers and workstations though. We will certainly see them entering that market by 2015 though.
Arbee - Friday, October 5, 2012 - link
Cortex A9's big IPC improvement came from going out-of-order, which kind of ruins your argument. Similarly, the X360/PS3 PowerPC chips are strictly in-order and super ultra slow as a result - at 3.2 GHz they can't match a PowerMac G5 with out-of-order execution at 2.2 GHz. But I suspect that wasn't the point - Sony and MS can claim the eye-popping (in 2006) 3.2 GHz figure, and the heat production is certainly less than a PPC G5's.
wumpus - Friday, October 5, 2012 - link
Has anyone seen an A9 in the wild? I don't doubt huge IPC improvements (back when O-O-O was new, it tended to double performance). My statement is that it will kill GIPS/W and that Intel can much more easily design a chip that can beat it in both raw performance and GIPS/W (note that your mention of heat production agrees with me). Also note I suspect that the goal of the A9 is to keep the power low enough to keep it out of where Intel wants to go. A rough guess is that ARM might have a chance with dual-issue o-o-o, but past that (roughly where the Pentium Pro was designed) they can't really go.
ElvenLemming - Friday, October 5, 2012 - link
The Cortex A9 has been in most major phone/tablet SoCs for the past two or so years: Apple's A5, A5X; Samsung's Exynos 4210, 4212, 4412; TI's OMAP 4 series; Nvidia's Tegra 2 and 3. Cortex A15 is probably what you were thinking of that we've yet to see out in the wild. It's out-of-order like the A9, but with a great deal of other improvements.
ericore - Friday, October 5, 2012 - link
Currently AMD has the upper hand in the notebook segment on battery life. Haswell changes that, but as is always the case with Intel, they will be pricey. And that's why AMD will still have 50% of the market: because vendors are cheap. Power savings are much less relevant on the desktop front; I don't care so much about power as I do about heat. The AMD X4 700 is an awesome 4-core CPU for $75. Technically, it has all that you need from a CPU. Add a Radeon 7770 (again cheap) and you're golden. Yeah, Intel is faster, but both Intel and Nvidia have shitty low-end products, and that's even more true when you think of Atom. 5-15% better single-threaded performance is not anything that is going to bury AMD lol.
On top of that, AMD has an Atom KILLER, and contracts with all major console vendors.
Haswell will have surprisingly little impact on AMD; what I am saying is that if you look at your own expectations, you'll realize they were highly inflated, and you'll wonder why it didn't do more damage to AMD. I've explained why. Nevertheless, Broadwell is a significant threat, and we'll probably see AMD start to lose market share (much more than with Haswell) unless AMD can fight back, and it will; but nobody knows if it will be enough.
A5 - Friday, October 5, 2012 - link
Uh, wow.
Zink - Saturday, October 6, 2012 - link
http://www.tomshardware.com/reviews/gaming-cpu-rev...
tipoo - Friday, October 5, 2012 - link
"Overall performance gains should be about 2x for GT3 (presumably with eDRAM) over HD 4000 in a high TDP part."Does this mean the regular GT3 without eDRAM cache will be twice the performance of the HD4000 and the one with the cache will be 4x? Or that the one with the cache will be 2x? In which case, what would the one with no cache perform like, with so many more EUs the first is probably correct, right?
tipoo - Friday, October 5, 2012 - link
"presumably with eDRAM"...So the GT3 in Haswel has over double the EUs of Ivy Bridge, but without the cache it doesn't even get to 2x the performance? Seems off to me, doesn't it seem like the GT3 on its own would be 2x the performance while the eDRAM cache would make for another 2x?DanNeely - Saturday, October 6, 2012 - link
It probably means that, like AMD, Intel is hitting the wall on memory bandwidth for IGPs. When it finally arrives, DDR4 will shake things up a bit; but DDR3 just isn't fast enough.
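For a sense of the wall being described, here's a rough sketch of dual-channel DDR3 bandwidth; the transfer rates and 64-bit channel width are the usual desktop/mobile assumptions, not measured figures for any specific part:

```python
# Peak theoretical bandwidth = transfer rate * channels * bytes per transfer.
def peak_bandwidth_gb_s(mt_per_s: int, channels: int = 2, bus_bits: int = 64) -> float:
    return mt_per_s * 1e6 * channels * (bus_bits / 8) / 1e9

print(peak_bandwidth_gb_s(1600))   # DDR3-1600: ~25.6 GB/s, shared by CPU cores and the IGP
print(peak_bandwidth_gb_s(2133))   # DDR3-2133: ~34.1 GB/s
```

Compare that with the well over 100 GB/s a midrange discrete card gets from GDDR5, and the appeal of an eDRAM cache or DDR4 is obvious.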
tipoo - Sunday, October 7, 2012 - link
I don't think so; doesn't the HD 4000 have more bandwidth to work with than AMD's APUs, yet offer worse performance? They still had headroom there. I think it's just for TDP: they limit how much power the GPUs can use since the architecture is oriented at mobile.
magnimus1 - Friday, October 5, 2012 - link
Would love to hear your take on how Intel's latest and greatest fares against Qualcomm's latest and greatest!
cosmotic - Friday, October 5, 2012 - link
Ah, an MPEG2 encoder. Just in time!
jamyryals - Friday, October 5, 2012 - link
This made me :)
name99 - Friday, October 5, 2012 - link
We laugh, but one possibility is that Intel hopes to sell Haswells inside US broadcast equipment. There isn't much broadcast equipment sold, but the costs are massive, and there's no obvious reason not to replace much of that custom hardware with Intel chips.
And much of the existing broadcast hardware (at least the MPEG2-encoding part) is obviously garbage --- the artifacts I see on broadcast TV are bad even for the prime-time networks, and are truly awful for the budget independent operators.
Much like they have written a cell-tower stack to run on i7's to replace the similarly grossly over-priced custom hardware that lives in cell towers, and are currently deploying in China. Anand wrote about this about two weeks ago.
vt1hun - Friday, October 5, 2012 - link
Do you have an idea when Intel will move to DDR4? Not with Haswell, according to this article.
Thank you
tipoo - Friday, October 5, 2012 - link
Haswell EX for servers will support DDR4, but even Broadwell on desktops is only DDR3; we won't see DDR4 on desktops until 2015.
jwcalla - Friday, October 5, 2012 - link
We'll probably see DDR4 in the ARM space before we have it on Intel. Maybe this should be AMD's focus of attack: if they can't compete on performance, at least try on chipset features.
Perhaps Intel's biggest concern would be if somebody comes along with a super-efficient x86 emulator for ARM. Going forward, "legacy applications" is going to be an increasingly important selling point to prevent ARM inroads on the low end.
Microsoft keeping their Windows ARM version locked-down is a key to that too, and likely a deference to their relationship with Intel. But Apple is less likely to similarly constrain themselves.
meloz - Saturday, October 6, 2012 - link
>We'll probably see DDR4 in the ARM space before we have it on Intel.>Maybe this should be AMD's focus of attack: if they can't compete on performance, at least try on chipset features.
The problem with DDR4 is likely going to be the price. We all know how the memory industry likes to jack up the prices whenever a new spec comes out. Remember how expensive DDR3 was when it started to replace DDR2?
Some people joke that this transition is the only time they make any money in the RAM business, and considering the low prices of DDR3 you have to wonder.
DDR4 might offer some performance and power advantage on release, but it will likely be more expensive and take time (12-18 months?) to offer a compelling performance / $ advantage over cheap DDR3 variants.
If AMD is trying to position itself as 'value' brand, chaining themselves to DDR4 (before Intel's volume brings down the prices for everyone) could spell their doom.
Kevin G - Friday, October 5, 2012 - link
Intel is set to launch Ivy Bridge EX on a new socket late in 2013. The on-die controller will likely use memory buffering similar to what Nehalem-EX and Westmere-EX use. The buffer chips may initially use DDR3, but this would allow for a trivial migration to DDR4 since the on-die controller doesn't communicate directly with the memory chips. Come to think of it, Intel could migrate Nehalem-EX/Westmere-EX to DDR4 with a chipset upgrade. Vendors like HP put the buffer chips and memory slots on a daughter card, so only that part would need replacement.
rundll - Friday, October 5, 2012 - link
Four cores and 95 W TDP. What is this?
meloz - Friday, October 5, 2012 - link
Yes this caught my eye and I would like an answer, too.Maybe it is one SKU with GT3 for desktop? Or maybe it is a 6 core part?
Or maybe.....it is the mother of all overclocking processors. Muhahahahah!
Kevin G - Friday, October 5, 2012 - link
I suspect that 95W is the rated socket limit. This is similar to how Intel advertises Ivy Bridge at 77 W on the desktop but tells motherboard manufacturers to build around the higher 95 W figure. What is odd is that Haswell will move some of the VRM circuitry onto the package, which should restrict just how far from that 95W figure motherboards can deviate.
meloz - Friday, October 5, 2012 - link
What a great article, Anand! Felt so good to read a 'proper' Anandtech article after so long, instead of the usual Apple worship and cheap fillers.
Haswell is looking very good. Would make an ideal upgrade for Sandy Bridge users. AMD is done, but thankfully Intel sees some threat from ARM so that will keep them innovating.
I hope Intel makes a sensible choice with Haswell SKUs and gets away from their artificial crippling and segmentation tendencies. That's about the only thing that can ruin Haswell.
Wolfpup - Friday, October 5, 2012 - link
Once again they bump up the number of transistors being used on their worthless video, and this time they even lower CPU performance (L3 cache) to appease their worthless video. Interesting article, but I guess I misunderstood previous articles... I thought Conroe through Ivy Bridge had 4 integer execution units per core? (As does Piledriver?)
haukionkannel - Friday, October 5, 2012 - link
Good article, and the fact that you need Win 8 to fully utilize Haswell was new information to me. It will be interesting to see how much better Haswell will be with Win 8 compared to Win 7. Seems to be the same kind of dilemma as with AMD's Bulldozer/Piledriver, where there seems to be somewhat better performance with the new OS, but how much remains to be seen.
Apple owns various CPU tech and design companies such as P.A. Semi. They can build their own CPUs (not x86 of course)...Apple will do what they can to take out the middleman.
jwcalla - Friday, October 5, 2012 - link
Apple doesn't have any fabs though, and if Samsung isn't willing to re-sign another contract, they're going to be in a bit of a bind. In other words, it won't be cheap. And even if Samsung does re-up, you can be sure that it'll come with an additional $1.05b price tag to offset any "losses" in their mobile division.
I felt the first page overestimated Apple's influence quite a bit. They have ~5% desktop marketshare and 0% in the server space. Not to trivialize any loss in CPU sales, but Intel's primary headwinds don't involve a possible Apple switch to ARM.
Kevin G - Friday, October 5, 2012 - link
Apple's influence comes from the mobile market, which is beginning to dwarf the PC market (and is larger than the server market in terms of volume). Apple is the largest tablet maker and a major smartphone manufacturer. Their hardware is backed by one of the largest digital media markets. To do this, Apple is the world's largest consumer of flash memory, whose orders are large enough to directly affect NAND pricing. With the rest of the industry going ultra mobile, they'll have to compete with Apple, who is already entrenched. Sure, the PC will survive, but mainly for legacy work and applications. There isn't enough of a PC market in the future to be viable long term with so many players.
jwcalla - Friday, October 5, 2012 - link
While all this is true, the first page seems to indicate that Intel is really pushing the low power envelope partly because of rumors that Apple will move away from Intel chips in their laptop / ultrabook products.
While I'm sure Intel is happy to be in MBAs, etc., losing that business isn't going to be as big a deal as the other pressures facing the PC market (as you mention).
Now if WinRT on ultrabooks / laptops began to take off... that would be a huge problem for Intel.
Kevin G - Saturday, October 6, 2012 - link
Losing just the MacBook Air isn't going to hurt Intel much as a whole, but it is doubtful that Apple would just move that product line to ARM. The rest of the lineup would likely follow. The results by the numbers would hurt Intel but wouldn't doom the company. Intel does have the rest of the PC industry to fall back upon... except the PC market is shrinking. Apple is one of Intel's best gateways into the ultra mobile market. Apple has made indications that they want to merge iOS and OS X over the long term, which would likely result in dropping either ARM or x86 hardware to simplify the lineup.
WinRT is also a threat to Intel and
Kevin G - Saturday, October 6, 2012 - link
(Hrm... got cut off there) WinRT is also a threat to Intel, but WinRT has next to zero market share. The threat here is any success it obtains. Apple, on the other hand, controls ~75% of the tablet market last I checked.
Android is a bit neutral to Intel, as manufacturers can transition between ARM and x86 versions with relative ease. Intel will just have to offer competitive hardware at competitive prices here. The sub-10W Haswell parts are going to be competitive, but price is a great unknown. The ARM SoCs are far cheaper than what Intel has traditionally been comfortable with. So even if Intel were to acquire all of the Android tablet market, it would be a minority at this time and over the short term (even in the best case scenario, it'd take time for Android based tablets to surpass the iPad in terms of market share).
So ultimately it would be best for Intel to snag Apple's support due to their dominant market share in the tablet space and influential position in the smart phone space.
andrewaggb - Friday, October 5, 2012 - link
Agree with others. Best Anandtech article I've read in a long time.
Most articles lack the detail and insights that this one has.
mrdude - Friday, October 5, 2012 - link
Great article. Great depth, great info and very thorough. Hats off :)But I couldn't shake the feeling that I was missing perhaps the most important bit of information: price.
Obviously, Intel isn't going to give that away 9 months away from the presumed launch date -- though in typical fashion we'll see it leaked early. It still is the biggest question regarding Haswell's, and in turn Intel's, success against ARM.
I think most consumers are already at that good enough stage, where your Tegra 3 or Snapdragon S4 can fulfill all of their computing needs on a tablet or a phone. The biggest drawback for productivity purposes isn't necessarily the "lack of CPU performance" but rather the lack of a proper keyboard/mouse, gaming, along with a rare application or two that's still locked to x86 (Office rings a bell, though not for long). Or I should say, these were drawbacks. Not any longer.
So is Intel going to cut their margins and go for volume? Or are they just going to keep their massive margins and price themselves out of contention? Apple carries with itself a brand name that people want. It's become more than a gadget, it's a fashion accessory. People don't mind paying the Apple tax. I don't think I ever will, but at least I can notice the trend. The Intel brand doesn't carry with it the same cult following, and neither does x86. Unless Intel is willing to compete with ARM on price, lowering the cost of their products below Apple's, I don't think the substantial increases in efficiency and performance will matter all that much.
name99 - Friday, October 5, 2012 - link
"Sandy Bridge made ports 2 & 3 equal class citizens, with both capable of being used for load or store address calculation. In the past you could only do loads on port 2 and store addresses on port 3. Sandy Bridge's flexibility did a lot for load heavy code, which is quite common. Haswell's dedicated store address port should help in mixed workloads with lots of loads and stores."The rule of thumb numbers are, on "ordinary" integer type code:
1/6 instructions are branches
1/6 are writes
2/6 are reads
2/6 are ALU
This makes it more obvious why Intel moved as it did.
You want to sustain as close to 4ops/cycle as you can.
This means that your order of adding abilities should be exactly as Intel has done
- first two ALUs
- next two read/writes per cycle (ideal would be a mix of load/store) but Intel gave us that you can do a load+store per cycle
- next two loads per cycle
- next make sure the branches aren't throttled (because back-to-back branches are common, and you want branches resolved ASAP)
- next make the load-store system wide enough to sustain a MAC per cycle (two loads+store)
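A toy model of that throughput argument, purely for illustration: the port counts below are rounded abstractions of an older, narrower port budget versus a Haswell-style one, not exact port maps.

```python
# How many cycles does it take to issue 6 instructions of the rule-of-thumb mix
# (2 ALU, 2 loads, 1 store, 1 branch) given a per-cycle port budget?
MIX = {"alu": 2, "load": 2, "store": 1, "branch": 1}

def cycles_per_six(ports: dict, issue_width: int = 4) -> float:
    front_end = 6 / issue_width                        # 4-wide rename/issue
    back_end = max(MIX[k] / ports[k] for k in MIX)     # most oversubscribed port class
    return max(front_end, back_end)

narrower = {"alu": 3, "load": 1, "store": 1, "branch": 1}  # rough pre-Sandy Bridge style budget
wider    = {"alu": 4, "load": 2, "store": 1, "branch": 2}  # rough Haswell-style budget

for name, ports in (("narrower", narrower), ("wider", wider)):
    c = cycles_per_six(ports)
    print(f"{name}: {c:.2f} cycles per 6 instructions (~{6 / c:.2f} IPC)")
```

With the wider budget, the 4-wide front end rather than the ports is what caps this mix, which is exactly the ordering argument above.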
It's hard to see what is left to complain about at this level.
And of course we have better lock performance. So what's left?
What I think still has substantial room for improvement (correct me if I'm wrong) is
(a) TLB coverage
(b) TLB efficiency.
TLB coverage could be improved with a 2nd level TLB but (as far as I know) Intel doesn't go in for that, unlike POWER.
By TLB efficiency, I mean not needing to lose performance due to different address spaces. Unfortunately Intel seems screwed here. The POWER segment scheme (especially the 64-bit scheme) is REALLY powerful here in allowing multiple address spaces to coexist, so that multiple shared libraries, the main app code, IO, and memory mapped files, can all have persistent simultaneous TLB entries. (Note that this has nothing to do with the Intel segment scheme --- different technology, to solve a different problem.)
As far as I know, right now all Intel has is a single ASID representing a process. Better than no ASID, and having to flush the TLB on every context switch; but not especially good at sharing entries --- so (again as far as I know) shared libraries or shared mem-mapped files being used by multiple processes, even when they are mapped to the same address, have to have separate TLB entries, each one with a different ASID corresponding to the process calling them.
name99 - Friday, October 5, 2012 - link
Stupid me. I should have read the entire article. So we do have a (nicely sized) 2nd level TLB. I guess my only remaining complaint now is that ASIDs are too coarse a tool.
In principle you could get around some of the problems I mention using dedicated large pages for some particular purposes (e.g. to cover the OS code and data, the equivalent of the frame buffer for modern windowing systems, and some pool of common shared libraries).
Does anyone know the extent to which both Windows and OSX actually make use of dedicated large pages in this way?
Peanutsrevenge - Friday, October 5, 2012 - link
Great article Anand, but when will Anand cloning be incorporated into CPU designs so we can all have one of you at home to pull out and extract information from at will??
Although, with that said, I was already made aware of much of this recently from listening in to some random guys babbling about tech stuff on a podcast ;)
Rectified - Friday, October 5, 2012 - link
Anand, you write the best tech articles on the web. As a graduate student in computer engineering, I appreciate the practical yet technical analyses you write on the industry. Keep it up!
Crazy1 - Friday, October 5, 2012 - link
I like the concept of Panel Self Refresh, yet I feel that Intel could implement this themselves. I'm not an expert, but couldn't a buffer be placed on the CPU package between the GPU and panel? This may not be as efficient as if the panel makers did it themselves and it would probably only work when using the IGP (when it would most likely have the greatest impact), but at least it is a step in the right direction.Additionally, Great Article! Anandtech provides some of the most thorough technology articles. Keep it up.
random2 - Saturday, October 6, 2012 - link
" If all mainstream client computing moves to smartphones,..........."Seriously? The idea of all mainstream computing done on nothing but smartphones seems to stretch the imagination just a bit much. There isn't even the most basic of businesses that do not have a computer (made with mainstream components as are most small and medium sized businesses) and business software. Don't forget the PC gamers and people who like larger viewing and typing surfaces. Or the fact that in eight years, home and business PC's will be blindingly fast with larger displays with much greater pixel density, possibly clear screen touch surfaces, likely alternative interfaces than just a keyboard and mouse and incredible computing and rendering power.
The likelihood of the general populace turning all their computing needs over to a palm-size PC I see as a kind of weird fantasy where people learn to love minute typing interfaces and squinting at high-density displays fit into 3.5 by 4.5 inches for long periods of the day without interruption. No, to push the idea of micro computing one must discount all of the other advances in the computer/electronics industries in order to make their pet theory viable.
random2 - Saturday, October 6, 2012 - link
"The race to the bottom that we've seen in the LCD space made it unlikely that any of the panel vendors would be jumping at the opportunity to make their products more expensive."It's unfortunate, because of what might have been had the manufacturers, of which there are only three main ones, if I recall, had the foresight to market to customers that weren't just looking to buy the lowest priced panel on display at Best Buy. Had they the initiative to have started years ago, there would be some pretty fantastic panels available today for much more reasonable prices than seen for the 27 and 30 inch 2560X1600 panels today.
Klugfan - Saturday, October 6, 2012 - link
This doesn't really belong in the Haswell article, but I would love to know more about the physics and constraints of TDP. Like, hit me with a chart of TDP impact for a variety of important parts in phones, tablets, laptops, and desktops. Show me a chart of TDP budgets and mitigation strategies. Explain to me roughly how physics forces those things to relate. Please.
Seems important, and it's easy to understand the comparison from Ivy Bridge to Haswell, but that doesn't feel like the big picture.
havoti97 - Saturday, October 6, 2012 - link
I read the 1st page then got bored. The writing style is overly wordy... am I the only one feeling this way?
xeizo - Saturday, October 6, 2012 - link
It's an article, not a twitter feed! Some of us like to get the whole picture, not just the flashy stuff....
watersb - Saturday, October 6, 2012 - link
Phenomenal feature, Anand! This is why I check your site each day. Thanks very much!
bill4 - Saturday, October 6, 2012 - link
Like Atom, you're stuck in no man's land: way too high for tablets and phones, but in desktops and laptops, who cares if the AMD solution uses 30 watts instead of 8? That difference isn't enough to matter when you take the whole platform into account, especially at lower price points where battery life won't be fantastic anyway. On the desktop it's completely pointless.
JlHADJOE - Sunday, October 7, 2012 - link
On a laptop, using 8 watts instead of 30 will more than triple your battery life, especially at lower price points/smaller form factors where manufacturers gimp the battery. How about browsing for 9 hours instead of 3? Or 27 hours instead of 9? I'd jump on it in a heartbeat.
1008anan - Saturday, October 6, 2012 - link
Haswell will sport 32 single precision or 16 double precision flops per cycle per core for its desktop and high tdp mobile skews [at least 30 watt and up].Can anyone speculate on how many single precision and double precision flops per cycle per core Haswell will execute for its low TDP skews? For example the less than 10 watt skews? the 15 watt skews?
I would also be interested in learning speculation about how many execution units (or shader cores if you prefer standard nomenclature) the low TDP Haswell products will have.
1008anan - Saturday, October 6, 2012 - link
Haswell will be able to execute 16 double precision or 32 single precision flops per clock per core for desktop and high TDP mobile skews [at least 30 watts and up].Can anyone speculate on how many flops per cycle per core the sub 10 watt and 15 watt Haswell skews will execute? Similarly I would be interested in hearing speculation about how many graphic execution units (shader cores) the sub 10 watt and 15 watt Haswell products will come with. Any speculation on graphics clock speed?
Is it possible that the high end tock 22 nm Xeon server parts could have 32 double precision or 64 single precision flops per clock per core?
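For anyone wanting to turn those per-cycle figures into peak throughput, the arithmetic is just cores × clock × FLOPS per cycle; the clocks and core counts below are made-up placeholders for illustration, since no Haswell SKUs had been announced:

```python
# Peak GFLOPS = cores * GHz * FLOPS per cycle per core.
def peak_gflops(cores: int, ghz: float, flops_per_cycle: int) -> float:
    return cores * ghz * flops_per_cycle

# 16 DP FLOPS/cycle/core = 2 FMA ports * 4 doubles per 256-bit vector * 2 ops (mul + add).
print(peak_gflops(cores=4, ghz=3.5, flops_per_cycle=16))  # hypothetical quad-core desktop part, double precision
print(peak_gflops(cores=2, ghz=1.5, flops_per_cycle=32))  # hypothetical low-TDP dual-core part, single precision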
Laststop311 - Saturday, October 6, 2012 - link
Best explanation of Haswell I've read to date. Good job Anand.
lmcd - Saturday, October 6, 2012 - link
Interestingly, this might be the first chance in forever AMD has at competing with Intel. If Haswell's sole goal is to hit lower power targets, and Piledriver hits its 15% and Steamroller its 15% over that, AMD is suddenly right up with Intel's i5 series with its GPU-less chips, and upper i3-range with their APUs, which is absolutely perfect positioning: most i5 purchases are for people planning to pair with discrete graphics, while most i3 series seem to go to the PC buyer looking for low price tags.The one downside is that the i7 series is Intel's money-maker: the clueless people who think they're getting maximum performance but are really just feeding the binning system and buying an unbalanced PC.
milkod2001 - Sunday, October 7, 2012 - link
u got it wrong bro, Intel's money maker is not i7, it's i3 and i5 (low end and a bit of mainstream). As for Haswell, on paper it looks too good to be true, as Ivy did last year, and it ended up everything but impressive.
Since Intel's Conroe core (2006) there actually haven't been any significant improvements worth mentioning. There's not much extra that today's CPUs can do that a Pentium 4 could not a decade ago.
I would love to see some innovations users could really benefit from (something like a reattachable, thin, light, portable, firm solar panel hooked to the back of the screen, or even built in as the last layer of the screen itself) and not that crap Intel/AMD gives us year by year.
xeizo - Sunday, October 7, 2012 - link
Anand is very right; it's all about power savings, which in effect makes smaller and more portable form factors possible! As for mainstream performance, my Linux workstation still uses a Q9450 rev. C1 from 2008 clocked at 3.2GHz, and an SSD of course. That box feels in every way as snappy as my Windows box with Sandy Bridge at 4.8GHz. Which means I really didn't need more performance than what the C2Q already gave. Of course the SB box benchmarks much faster, about twice as fast in most things, but the point is that for myself I really don't need that performance except for the occasional game.
But I could use a smaller, cooler running device instead!
Teknobug - Tuesday, October 16, 2012 - link
LOL my Linux system still runs a Sempron and it's still fast.
oomjcv - Sunday, October 7, 2012 - link
Very interesting article, enjoyed reading it.
Something I would like to see is a decent comparison between Intel's and AMD's plans. Many might be able to outline the basics, but a thorough article on the subject should be rather enlightening... Comparing their design philosophies, architectures, possible pitfalls and successes etc, pretty much what's been done with this article only with both companies.
I know it might be time consuming but I imagine it could be quite a nice read.
zwillx - Monday, January 21, 2013 - link
Agreed; it's difficult to find the common ground with so many different chip architectures. x86 is a big enough competition, but now it's getting split wide open with ARM and big.LITTLE etc., so it's always helpful to have either more charts or real world examples lol.
My take from this article though: Haswell still won't have the prowess to beat the GT650. I have a GTX660 in my laptop w/ Optimus (TM). It works. Runs a game on the HD4000 at 17 FPS. On the GTX660 I get 100+ fps, and am able to use higher anti-aliasing settings. So clearly a 100% improvement over Ivy Bridge only puts the chip into the "mediocre" category by the time it's released.
alexandrio - Sunday, October 7, 2012 - link
"The bigger concern is whether or not the OEMs and ISVs will do their best to really take advantage of what Haswell offers. I know one will, but will the rest?"I am curious who is that one OME that will do their best to really take advantage of Haswell offers?
zwillx - Monday, January 21, 2013 - link
Apple. Or are you joking? I personally hate Apple and have since the original iMac, but their engineering is top notch when it comes to getting ideal performance from silicon to user. So.. guessing that's the reference.
Silma - Monday, October 8, 2012 - link
A fine read, technically very comprehensive, but still overly melodramatic.
While it is true that it is crucial for Intel to get a foot in the BYOD market, some things still hold true:
- In value and profit the PC processor market is much bigger than the byod processor market and will stay so for years because PCs, especially business PCs won't disappear anytime soon.
- Nobody can touch Intel in this market, it has been proved for decades. Not AMD at the height of its success, not mighty IBM, not Sun, nobody.
- Contrary to what you say Intel has a definitive production advantage and there are very few fabs able to compete. Note that Apple is incapable of producing processors, it is dependent on external manufacturers.
- What Apple does with its processor is interesting business wise for its iPods/Pads/Phones, but Apple doesn't have the research power Intel and others have in the chip space and I can't see how it will innovate better than Intel and other competitors.
- Intel is aware of its shortcomings, is pushing tremendously in the right direction. A competitor that doesn't rest on its laurels is a mighty threat, ARM beware.
- If Apple stops using Intel processors, it will of course wipe a few hundred millions of Intel's turnover but won't be anything remotely dangerous for Intel
- It remains to be seen that Apple users will accept yet another platform change.
- It remains to be seen that it would make sense business-wise for Apple
- I am quite sure many phone companies will be open about renewed chip competition and not letting a single platform become too powerful.
All in all it seems to me Intel is as dangerous as ever, executing very well in its core business and heading towards great things in the phone/pad space.
johnsmith9875 - Thursday, October 11, 2012 - link
Why couldn't they at least stick to LGA2011?
defiler99 - Tuesday, October 16, 2012 - link
One of the best articles on Anandtech in some time. This is great original tech industry reporting.
Gc - Saturday, January 12, 2013 - link
Congratulations, an Intel CPU engineer wrote around 27 Dec 2012:
"... Anandtech's latest Haswell preview is also excellent; missing some key puzzle pieces to complete the picture and answer some open questions or correct some details but otherwise great. ..."
http://www.reddit.com/r/IAmA/comments/15iaet/iama_...
xaml - Thursday, May 23, 2013 - link
This was first posted here a few handfuls of pages back as a comment by user "telephone". ^^
yhselp - Friday, March 29, 2013 - link
A few questions.
Is there going to be a replacement (37W) for the current IVB 35W quad-core part? Quite a few designs are now dependent on this lower-power quad-core option - the Sony S-series and Razer Blade, to name a few.
When can we expect all mobile CPUs (except maybe for the extreme series) to fall into the 10W-20W range? In three years' time and 10nm?
The decision not to include GT3 with desktop parts is very disappointing. A 35/45W low-voltage part with GT3 would make for an excellent HTPC build, among other things. Is there a chance Intel changes their mind and starts shipping GT3 desktop parts at some point?
JVimes - Tuesday, August 19, 2014 - link
Does EU stand for Execution Unit? That was surprisingly hard to google for.