63 Comments

  • ibudic1 - Tuesday, May 17, 2011 - link

    I also have doubts about going beyond 14nm. Quantum physics will start to rear its ugly head. I think the transistors will be too thin to stop the quantum effects of electrons "jumping" over to another gate.

    I know that this was said before, but this time, I don't think they will be able to cheat physics.

    Also, is anyone else nervous about 3-gate transistors?
  • ghost55 - Tuesday, May 17, 2011 - link

    They said the same thing about 90nm and the light diffraction limit, and look where we are today. My guess is that they will go down to the transistor limit of one atom thick and ten atoms wide (transistors that size have already been made), and then things like layered circuit boards, optical processors, and graphene will come into play. Also, the 3-gate transistor will probably be a major step forward.
  • softdrinkviking - Tuesday, May 17, 2011 - link

    3-gate transistors seem fine to me, but they look like stopgap tech on the way to a whole new way of building transistors rather than a step in a new direction.
  • FunBunny2 - Tuesday, May 17, 2011 - link

    -- a whole new way of building transistors rather than a step in a new direction.

    Now, that's wishful thinking. If the limit is a 1 x 10 atom device, transistor or otherwise, then that's the limit for an electrical device. Which brings up the various notions of bio- or photon-based devices. Someone has to invent the valve/transistor/foo equivalent first, though.
  • numberoneoppa - Wednesday, May 18, 2011 - link

    Memristors!
  • therealnickdanger - Wednesday, May 18, 2011 - link

    Or maybe we'll just see a return of large computers! Can't build the chips smaller? Put more chips in! :)
  • wumpus - Friday, May 20, 2011 - link

    From memory, lithography light issues started closer to 1000nm (going submicron). Around 90nm (and really at the next step down), leakage became an issue. That's even more apropos to this discussion in that the transistors themselves stopped working the way they used to. Before, a shrink got you cheaper, faster, and lower power. After 90nm you got cheaper, faster, lower power: pick two (and if you don't work on lowering power, heat goes up exponentially as you shrink the process).

    As we keep going, the miracles needed for each step pile up. I think that between the money and the invested equipment, the show can go on, but don't expect it to be CMOS transistors.
  • kb3edk - Tuesday, May 17, 2011 - link

    14nm is not the end of the road but it's pretty darn close. We will probably see a 10nm process node around 2016 and then a "final" 6nm process in 2018 or so before quantum mechanics finally brings an end to the age of Moore's Law.

    I'm extrapolating out from this interview with an Intel engineer from a few years ago... http://www.webcitation.org/5hjItXYEI

    He seems very confident that Intel will just keep on chugging away with nanotubes and whatnot... I am less convinced, however. I think that as this decade comes to a close, the only major improvements in CPUs will be in microarchitecture, not lithography. It could even be the end of the road for x86 altogether; it's possible the CPUs of the 2020s will be what we call GPUs today.
  • FunBunny2 - Tuesday, May 17, 2011 - link

    -- It could even be the end of the road for x86 altogether.

    IIRC, it's been years since Intel burned the instruction set into hardware. They've been using all those millions of transistors to build RISC hardware, with an emulation layer for x86 on top. Why they did that, rather than use the budget to make really fast x86 hardware, is instructive. Even IBM, starting with the 360, knew enough to emulate on the cheap machines and build the instruction set into hardware on the fast machines. The only plausible answer is to provide an escape hatch from x86.
  • Strunf - Wednesday, May 18, 2011 - link

    Millions of transistors out of billions don't seem like much of a handicap to me. It's not the decode from x86 to micro-ops that makes x86 worse than anything else; that's just a small step in the whole processing pipeline, and probably the least demanding one.
  • DanNeely - Wednesday, May 18, 2011 - link

    The truth is far more prosaic. As in all instruction sets originally designed on the CISC model, some x86 instructions require much more work than others. To keep the CPU from running much more slowly than it needs to 99% of the time, implementing the complex instructions in microcode and using more RISC-like execution units internally is the obvious way to go.

    Lest you think that means RISC "won" the architecture war: MMX, SSE, and more recently on-die IGPs (specialized instruction sets for specific subsets of the work) are a CISC concept. The war ended when die space grew large enough to combine the best of both worlds on a single chip.
  • wumpus - Friday, May 20, 2011 - link

    The real driver of the CISC school of thought was to make writing assembler as easy as a high-level language (google "closing the semantic gap"). That idea is dead and buried. The only other thing CISC might still do is reduce code size; in modern terms, that means reducing pressure on the instruction cache (I could code hello world into a <20 byte .COM file in assembler; don't ask what it compiles to now).

    SSE, MMX, and the rest are a revival of vector design. The big catch is that Cray machines included scatter/gather to access memory better; that's not going to happen on current designs. The "latest thing" is the fused multiply-add (don't look now, but Intel and AMD are botching the standard), which has been a DSP concept since forever. I suppose all these ideas reduce the size of the inner loops (in bytes), but I wouldn't call them "CISC concepts," since they often appeared on RISC machines first.
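    (For the curious, a tiny C sketch of why single-rounding FMA matters. The inputs are contrived to expose the rounding difference, and fma() here is just the C99 math-library function, not any particular Intel or AMD instruction; compile with contraction disabled, e.g. -ffp-contract=off, and link with -lm.)

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double a = 1.0 + 0x1p-27;        /* 1 + 2^-27, exactly representable */
        double b = 1.0 - 0x1p-27;        /* 1 - 2^-27 */
        /* a*b is exactly 1 - 2^-54; a separate multiply rounds it to 1.0 */
        double p = a * b;                /* rounds to 1.0 */
        double separate = p - 1.0;       /* 0.0 */
        double fused = fma(a, b, -1.0);  /* one rounding: exactly -2^-54 */
        printf("separate: %g\nfused: %g\n", separate, fused);
        return 0;
    }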
  • ibudic1 - Wednesday, May 18, 2011 - link

    about 2016...

    At the beginning of 2015 it will become obvious that oil production is dropping by about 3% per year. Google "peak oil." Since energy demand is price-inelastic, this will send the prices of everything through the roof, and the world economy will be close to collapse, or will actually collapse.

    So, if for no other technical reason, I doubt we will see 10/11nm soon, or ever. If we do see 10/11nm, it will be many years down the road, probably not before 2020. A little after that, or at about the same time, the US will have lost its dominance as the world's economic and military leader. (I've read an article by a CIA analyst claiming this.)

    So be ready for 2015+.
  • Thermogenic - Friday, May 20, 2011 - link

    Well, if you read it on the Internet, I guess that means it has to be true!
  • ibudic1 - Sunday, May 22, 2011 - link

    Hm... where do you get your info?

    Your friend? Books? Newspapers? What's your point? Wiki articles for the uninitiated:

    http://en.wikipedia.org/wiki/Quantum_tunnelling

    http://en.wikipedia.org/wiki/Peak_oil

    http://en.wikipedia.org/wiki/Supply_and_demand

    Thank you.
  • L. - Thursday, May 19, 2011 - link

    The age of Moore's Law will extend far beyond the limits of the current lithographic process family, as there are many other ways to go faster, many of them as yet unexploited because they were not the best (or only) choice.
  • GullLars - Tuesday, May 24, 2011 - link

    Moore's Law describes transistor density at the optimal cost-per-transistor point, not the power or speed of said transistors. If you could find a way to double transistor density at the same cost every 18 months without shrinking the transistor size, it would still count.

    Some problems scale near-perfectly with more transistors; others quickly hit diminishing returns, which makes the case for faster transistors. Many problems are not solved optimally today and still have a lot of potential there. Moore's Law alone is not the fate of the computing industry, although it has been the driver of a golden age.
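    (A throwaway C sketch of that 18-month doubling cadence; pure back-of-the-envelope arithmetic, not actual process-node data.)

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* density multiplier after n years at one doubling per 18 months */
        for (int years = 2; years <= 10; years += 2)
            printf("%2d years -> %6.1fx density\n",
                   years, pow(2.0, years * 12.0 / 18.0));
        return 0;
    }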
  • Jacmert - Wednesday, May 18, 2011 - link

    - "Also is anyone else nervous about 3 gate transistors?"

    What I'm worried about is the 4 gate push. We'll probably see less of that now, though, since Blizzard nerfed the warp gate research time in the last patch. #starcraft2
  • Ammaross - Wednesday, May 18, 2011 - link

    - "Also is anyone else nervous about 3 gate transistors?"

    More "gates" = more accurate signaling at higher speeds or lower voltages. The more accurate or low power a specific set of transistors need to be, the more gates they'll put (up to 5 was mentioned...). The usual is likely going to float in the 2-3 range.
  • L. - Thursday, May 19, 2011 - link

    Good .. I'll go back to Queen rushing then ;)
  • dgz - Tuesday, May 24, 2011 - link

    You just won the Internet. Thanks for the laugh.
  • L. - Thursday, May 19, 2011 - link

    Nervous? What for?

    Anyway, good pressure from Intel... let's see if the other fabs can compete xD

    Because otherwise we might see some more "Sandy Bridge vs Phenom II" slaughterfests... interesting.
  • fri2219 - Tuesday, May 17, 2011 - link

    Investor: Did you guys miss the boat on low-power chips? It seems like the growth sectors in general, and mobile computing in particular, are being staked out by your competitors.

    Intel: "HEY LOOK AT THIS OVER HERE, IT'S A BALLOON ANIMAL!"

    I have no doubt that this time they'll get low power right, just like they've done with Parallel Processing, Graphics, Networking, and Memory.

    P.S. The Cubs are GOING TO WIN THE WORLD SERIES!!!!! (In 2014)
  • arse_luvr - Tuesday, May 17, 2011 - link

    Ha! Comment of the day. You, sir, have hit the nail on the head. 10/10
  • KMJ1111 - Wednesday, May 18, 2011 - link

    Just because they don't list the power levels on the first slide and then extend the SoC to about 8W on another doesn't mean they aren't getting the lower power levels right. Your phone just gets the added feature of being a hand warmer for as long as your battery lasts... imagine all the cold regions you could sell it in! I mean, their first phone is with Finnish Nokia, right?

    Cool that they are lowering the power for other chips, though! I'd love a future laptop for all-day working and commuting! The slim design, if it's really that thin, would be awesome!
  • ebolamonkey3 - Wednesday, May 18, 2011 - link

    If I could +rep this post, I would!
  • TeXWiller - Wednesday, May 18, 2011 - link

    Instead of balloons, they are probably talking about air gaps at the 14nm process node...
  • umbrel - Wednesday, May 18, 2011 - link

    I get your point, but as much as any enthusiast hates Intel's performance in those areas (parallel processing, graphics, networking, and memory), if they do as "badly" in smartphones as they have done there (commercially), they might just get that 50% market share they are promising for 2016.
  • wumpus - Friday, May 20, 2011 - link

    Just how often has Intel made a penny on anything other than x86 desktops and x86 servers? The push into servers was probably Intel's last profitable side adventure, but it largely started out as renaming existing chips under a different brand and charging heaps of money for them.

    They can afford to fail often. They do cut back when they fail too often.
  • GullLars - Tuesday, May 24, 2011 - link

    They were really successful with the X25-M and X25-E (gen 1). Too bad the market wasn't ready at the NAND cost per GB back then. And since then they have completely dropped the ball. The 510 is a rebrand, and a year late. If they had bought Fusion-io back in '08-'09, they could be having a field day in the server market about now. Efficient high-performance storage is becoming a multi-billion-dollar industry and is trending to go far into the double digits (of billions) this decade. Data mining, relational databases, and dynamic services are growing fast.
  • SteelCity1981 - Wednesday, May 18, 2011 - link

    Intel has been blindsided by ARM. I guess Intel's cocky attitude got the better of them, and now all of a sudden they are feeling the heat from ARM, and even AMD, in the micro-device sector.
  • Stuka87 - Wednesday, May 18, 2011 - link

    Well, Intel sat around for years with basically no changes to the Atom. Yes, they added a dual-core model and Hyper-Threading, but neither of those is an architectural change.

    This gave ARM and AMD time to develop their own products to compete in the same market. They now have a head start, and Intel has to play catch-up. But with as much money as Intel has, I doubt it will take them long to catch back up.
  • DanNeely - Wednesday, May 18, 2011 - link

    Intel needed process shrinks to get the Atom's die size small enough to play in the smartphone arena. Architecture changes that made it larger again would have been counterproductive.

    For comparison, the Tegra 2 (the only dual-core A9 I could find numbers on) is 260M transistors, with the CPUs taking only 10% of the total. The original single-core Atoms were 47M transistors for what was basically just the CPU; the D510 Atom is 176M transistors and presumably spends roughly half of them on the CPU cores. I couldn't find a transistor count for the NM10 southbridge, but assuming transistor count is proportional to package size (a weak assumption, but I don't have anything better), it would have about 115M transistors. Granted, not all of them are for things needed in a smartphone/tablet; on the other hand, it also lacks some things one would need. But at 45nm the Atom platform barely has transistor parity and has significantly fewer transistors to spend on things like the GPU.
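    (Putting those numbers into a quick C tally; the NM10 figure is the same weak package-size guess as above, not a published count.)

    #include <stdio.h>

    int main(void) {
        double tegra2     = 260e6;          /* whole SoC */
        double tegra2_cpu = 0.10 * tegra2;  /* ~26M for both A9 cores */
        double d510       = 176e6;          /* CPU die: 2 cores + uncore */
        double d510_cores = 0.5 * d510;     /* ~88M on the CPU cores */
        double nm10_guess = 115e6;          /* estimated southbridge */
        printf("Atom platform ~%.0fM vs Tegra 2 %.0fM transistors\n",
               (d510 + nm10_guess) / 1e6, tegra2 / 1e6);
        printf("CPU cores alone: Atom ~%.0fM vs A9 ~%.0fM\n",
               d510_cores / 1e6, tegra2_cpu / 1e6);
        return 0;
    }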

    It won't be until the 32nm shrink that Intel will be able to get enough transistors to catch up on the GPU side. Coming from the other direction, the Cortex A15 is a much more complex design and will presumably be significantly larger, reducing Intel's dependence on its process advantage and raw CPU power to be competitive.
  • L. - Thursday, May 19, 2011 - link

    Well, you're wrong to doubt.

    How long has Intel tried to make graphics?
    How far have they gotten?

    "Quite a while" and "nowhere" are the answers...

    And there are quite a few markets Intel is not going to win anytime soon. I suspect they will lose the SSD market within the next five years as well; they're behind on controller tech, and they're clearly not the only ones with high-quality NAND fabs.

    The reason Intel would remain on top is that we know they can redo what they did in the Athlon era: play dirty, and do it damn well.

    That Intel was able to pass through more than 3 years of technological inferiority without suffering any heavy market losses is proof they are financially solid. That doesn't help with the tech, though, even if the Core era has been quite strong since.
  • JasonInofuentes - Sunday, May 22, 2011 - link

    While their graphics efforts (more on this and a few other things later) haven't yielded a supremely viable product, they have improved an awful lot while lowering their power envelope. Further, they are willing (and this is something they should get a lot of respect for) to put their research to good use no matter what its original intent. Larrabee was introduced as their move toward high-end graphics, and while it was canceled as a standalone product, it is still a big part of their efforts to compete in HPC. As we know now, Conroe was the result of an exercise in developing mobile chips, a move they made well ahead of the market's shift toward laptops as the workhorses of our tech ecology.

    Think of it this way: Intel squeezes something out of every bit of research they do, so even if their SoCs don't compete with ARM, they will use the technology to move the goal line forward somewhere.
  • j108b - Wednesday, May 18, 2011 - link

    Now I just hope that Intel will give real confirmation of whether Ivy Bridge (s1155) will work on existing chipsets.
  • L. - Thursday, May 19, 2011 - link

    Intel + long-term platform = boom.
  • dragosmp - Wednesday, May 18, 2011 - link

    About:

    "The basis for Fast Flash Standby is that while going into sleep is fast, it requires leaving the RAM powered up to hold its contents, which is why sleep is only good for a few days of standby versus weeks for hibernation."

    I agree with your assessment of "Sleep," but saying "Hibernation" holds for only a few weeks is wrong. Hibernation = Shut Down from a battery-life perspective; a computer can stay in hibernation indefinitely and will not consume any more battery than if it were shut down. One can dismantle a rig, rebuild it, and the rig will resume from hibernation. I did this with mine a few years ago: dismantled, packed, flown by plane, rebuilt, and it resumed exactly where it left off.

    I am writing this post because some users seem to think hibernation is some sort of deeper sleep, and quantifying battery life under hibernation versus sleep only supports that misconception. Intel's tech is of doubtful usefulness: if a computer already has an SSD as its OS drive and hibernation enabled, then this Fast Flash Standby effectively already exists.
  • Ryan Smith - Wednesday, May 18, 2011 - link

    I don't disagree with you (Jarred could say more; this is his specialty). However, those numbers came specifically from Intel.
  • L. - Thursday, May 19, 2011 - link

    Quite simply, Ryan, hibernation is this:

    -> Write all RAM contents to hiberfil.sys
    -> Force shutdown

    Then when you come out of it:

    -> Load the OS
    -> Load all the RAM back

    And there you go.

    Because of that, you can hibernate for as long as your storage device (HDD, SSD, or anything you'd like to use) survives, which is quite a long time, yes.
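    (A toy C sketch of that save/restore cycle; the buffer and HIBERFILE name are hypothetical stand-ins, nothing like the real Windows mechanism.)

    #include <stdio.h>
    #include <string.h>

    #define RAM_SIZE 4096
    #define HIBERFILE "hiberfil.bin"     /* hypothetical stand-in */

    static void hibernate(const unsigned char *ram) {
        FILE *f = fopen(HIBERFILE, "wb");
        if (!f) return;
        fwrite(ram, 1, RAM_SIZE, f);     /* write all RAM contents... */
        fclose(f);                       /* ...then force shutdown */
    }

    static int resume_from_disk(unsigned char *ram) {
        FILE *f = fopen(HIBERFILE, "rb");
        if (!f) return 0;                /* cold boot: no image */
        fread(ram, 1, RAM_SIZE, f);      /* load all the RAM back */
        fclose(f);
        return 1;
    }

    int main(void) {
        unsigned char ram[RAM_SIZE] = {0};
        if (resume_from_disk(ram)) {
            printf("resumed: %s\n", (char *)ram);
        } else {
            strcpy((char *)ram, "state from last session");
            hibernate(ram);
            puts("no image found; wrote one and 'powered off'");
        }
        return 0;
    }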

    Intel can say any random shit they want; they did not write the Windows hibernation code, so the only real info you can get is from Microsoft.
  • JarredWalton - Thursday, May 19, 2011 - link

    Sadly, "weeks of hibernation" is probably an apt description for most laptops. I've had a lot of laptops where the battery will discharge slowly just sitting on my desk, to where it will be pretty much dead after a month (and certainly after two). Sure, hibernation will still resume from where you left off, but the battery drain is irritating. Use better battery technology already, OEMs!

    Other than that, you're correct.
  • L. - Thursday, May 19, 2011 - link

    Are you talking about permanently plugged-in laptops?
    Because my boss killed his N900 battery just like that... and his laptop's too, I think...
  • JarredWalton - Thursday, May 19, 2011 - link

    No, I'm talking about low quality Li batteries that just don't hold a charge. I've seen it with Dell, HP, Acer, Toshiba, Sony, etc. batteries. Simply put, all batteries are *not* created equal. Most of the business stuff, not surprisingly, is better than the consumer stuff. I just wish I could have a laptop battery that would hold 80% of its charge if left on the shelf for a year -- and I do have some Eneloop AA rechargeable batteries that will do exactly that.
  • Interested Novice - Wednesday, May 18, 2011 - link

    Do you mean 'of doubtful usage' if you already have an SSD in your PC? Obviously most smartphone and tablet users have solid-state storage, but I expect the number of PC users that already have SSD-capability is very small. This seems like the best of both worlds: a hybrid storage strategy that lets you enjoy the best of solid-state and HDD storage. That said, I think the solution will become a commodity rather than something Intel can differentiate on, at least beyond the short term.
  • GullLars - Tuesday, May 24, 2011 - link

    "I expect the number of PC users that _already_ have SSD-capability is very small"
    Already? I've had SSD for 3 years now. Most of my close family and friends for about 2 years. Everyone i game with online has had for at least a year. I suspect the line is drawn where people know that there is a storage device inside their computer, and it's not just magically a part of the box with the power button.
    That said, the number of people who don't think this could be concidered small. I think about 10-15% of users. This will change as major laptop manufacturers start including SSDs as boot drives as default sometime in the next 1-3 years.
  • L. - Thursday, May 19, 2011 - link

    I believe you are correct.
    Furthermore, if said disk is a SandForce-based one, there's a good chance it'll beat the Intel solution by a fair margin: there's usually a lot of duplicated data in RAM (i.e., it's uncompressed).
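    (A quick C sketch of how well repetitive data compresses, using zlib's compress(); the buffer is synthetic, not a real RAM image. Link with -lz.)

    #include <stdio.h>
    #include <zlib.h>

    int main(void) {
        static unsigned char src[1 << 20];       /* 1 MiB stand-in "RAM" */
        for (size_t i = 0; i < sizeof src; i++)
            src[i] = "AAAABBBBCCCC"[i % 12];     /* repetitive fill */

        static unsigned char dst[1 << 21];
        uLongf dlen = sizeof dst;
        if (compress(dst, &dlen, src, sizeof src) != Z_OK)
            return 1;
        printf("1 MiB -> %lu bytes (%.2f%% of original)\n",
               (unsigned long)dlen, 100.0 * dlen / sizeof src);
        return 0;
    }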
  • Lucian Armasu - Wednesday, May 18, 2011 - link

    Intel is just trying to capture some mindshare by announcing chips years ahead of launch. I don't think that will work. ARM is just one process node behind Intel, so ARM chips will be at 28nm in 2012 and at 20nm in 2014.
  • L. - Thursday, May 19, 2011 - link

    Yup, and the fact is, at the same node, both AMD and ARM kill Intel single-handedly.

    They'll need a big hit like the Core architecture, but can they do it in low power? I doubt it; they do NOT have GPUs, and they will have to partner with NVIDIA or others to compensate.

    That's not in their slides (quite normal; it would move their stock rating), but it's their only way out.
  • DanNeely - Wednesday, May 18, 2011 - link

    The old rule of thumb was always that an architecture was good for a bit under a 10x spread in power levels, with the outer edges being somewhat ragged: Intel's old Extreme processors were only marginally faster than the next-lower-power bin, and the ULV processors took a massive performance hit versus the LV ones in the same family.

    I'm wondering if that's still true now as we continue to move to a many-core model. If you design a core with a sweet spot in the 3-10W range, you can get 15W mobile quad-cores for the low end and 120W 12-core processors for high-end servers out of the same architecture.
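    (The per-core arithmetic behind those figures, as a trivial C check; both ends land within the 3-10W sweet spot.)

    #include <stdio.h>

    int main(void) {
        printf("15W mobile quad:     %.2f W/core\n", 15.0 / 4);
        printf("120W 12-core server: %.2f W/core\n", 120.0 / 12);
        return 0;
    }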

    My suspicion is that a large part of how Intel intends to cut the power numbers for its mobile parts to half their current range is by making the Ivy Bridge mobile chips 22nm die shrinks of the current ones without bumping core counts at each price point, and then repeating the no-core-count-increase approach when Haswell launches.
  • L. - Thursday, May 19, 2011 - link

    What an insight... lol.
    Just kidding, man, but have you seen the latest from AMD, and the next-latest? ^^
  • dealcorn - Wednesday, May 18, 2011 - link

    Where did you get the slide of "Medfield" comparative Power Consumption? Page 10 of Anand Chandrasekher's Investor Meeting pdf has the same data but identifies it as Moorestown.
  • Ryan Smith - Wednesday, May 18, 2011 - link

    That slide came from the following deck, which was the architecture group's presentation.

    http://intelstudios.edgesuite.net/im/2011/pdf/2011...
  • dealcorn - Wednesday, May 18, 2011 - link

    OK: the 2010 IM PDF has Chandrasekher presenting the power consumption data as Moorestown, based on a combination of measurements and targets. The 2011 IM PDF has Perlmutter presenting identical data but calling it Medfield, based on estimates. Either the Moorestown data was off, or we are being sandbagged, since 32nm should be more efficient than 45nm.
  • StormyParis - Wednesday, May 18, 2011 - link

    I always love those slides that compare today's products to tomorrow's. Assuming ARM stands still, Intel will rule the roost.

    It's funny how Intel has a clear lead in process technology, which allows it to keep its x86 lead, but has pretty much failed at everything else (networking, wireless, Itanium, graphics...).

    It'll be fun to see if/how they can compete with ARM. I'm not really holding my breath, seeing how they still can't really do integrated graphics right and have spent years hyping stuff that never materializes.
  • umbrel - Wednesday, May 18, 2011 - link

    " but have pretty much failed at everything else (network, wireless, itanium, graphics...).

    It'll be fun to see if/how they can compete with ARM. I'm not really holding my breath, seeing how see still can't really do integrated graphics right"

    If they compete with ARM the same way they have done in integrated graphics they will have 50% marketshare, you know that right?

    Not that I disagree with your complaint with their performance, but as a company they have comercially succeded is all those areas:
    Network: I don't know... do you mean Ethernet? Routers? Cloud services?
    Wireless: first ones to deploy 3.5G (wimax)... probably will loose against LTE though; I think they have the most market share for wi-fi; does having the wireless chip in the iphone counts as Intel's win? technically that was Infineon's.
    Itanium: loosing against their own x86... still managed to survive Cray and Sun, and IBM is still there but I haven't heard much besides Watson.
    Graphics: Intel has 50% market share... the cheaper 50% though
  • i_just_took_a_crap - Wednesday, May 18, 2011 - link

    "they compete with ARM the same way they have done in integrated graphics they will have 50% marketshare, you know that right?"

    You've missed the mark as much as humanly possible. Intel's IGP marketshare has less to do with a competitive product, and more to do with their CPU monopoly. Nobody wants or needs Intel-anything in their phone, and you can bet any design wins Atom gets will be heavily "subsidized" by Intel.

    You're also way off about Itanium, it's a fair bet to say you're not an IT professional. Those are all mainframe CPUs, which are used to run various mission-critical systems that normal people never get to see up close (ATMs, highly important/sensitive database apps, etc...). Itanium's performance sucks, it has no advantage over IBMs or Sun's mainframes, and the die size is substantially larger than the biggest Nvidia GPU.
  • L. - Thursday, May 19, 2011 - link

    Boom.
    Nice one.
  • L. - Thursday, May 19, 2011 - link

    Well, the last time I looked for a wireless card, there was an Intel mini PCIe card at 13 bucks doing b/g/n... can't say I wasn't pleased.

    Itanium... lol, they were lucky Oracle partnered on that.

    Intel should buy NVIDIA and be done with it... or maybe Huang is just waiting for more market share to buy Intel, or to be bought at twice the price... who knows.
  • SimpleLance - Wednesday, May 18, 2011 - link

    Intel talks in watts, whereas ARM talks in milliwatts.

    The only advantage that Intel has today is in fabrication.
  • JasonInofuentes - Sunday, May 22, 2011 - link

    I will mention that this same point was made during the architecture group's presentation, possibly during the Q&A. You can talk in watts or milliwatts, but performance per watt (or milliwatt) is where Intel wants to compete. And I'm really fascinated by whether their efforts will pay off. Certainly it's fair to say that as ARM performance has increased, its power envelope has increased as well, even if it's still counted in milliwatts.

    I guess the message I'm trying to evangelize is: Keep an open mind, and always be ready to be impressed, otherwise you're not going to have as much fun.
  • mosu - Thursday, May 19, 2011 - link

    Why is Intel feeding us crappy Atoms when they can do better? That's why I don't like Intel: no DX11, no decent sound, lots of limitations, and promises of future chips while in fact they are waiting for others to produce something they can "upgrade" later.
  • vision33r - Thursday, May 19, 2011 - link

    Outside of netbooks, I can't think of any tablet or smartphone maker willing to invest in Intel's CISC-based designs.

    Android, iOS, and Windows Phone 7 all run on ARM, which is RISC-based. Nobody is gonna go CISC when these OSes and apps are glued to RISC.

    Intel failed big by pushing their designs too far and never making the painful move to RISC. Eventually the PC market will slow down, because most companies and people don't need that much of an upgrade.

    Don't tell me my grandma needs another Core i7 just for web surfing. Business customers do not need high-end CPUs; only servers do.
  • L. - Thursday, May 19, 2011 - link

    Even servers don't "need" high-end CPUs; it's just much easier to cram as many cores as possible onto the same die. In the end, what was a CPU has become a core, and it's still roughly at P4 performance level -- with some nice improvements, alright.

    AMD is a good example of where all this is headed:
    Llano: 1 Bulldozer module (I don't know the real name, or whether AMD has planned a 1-module Llano, but whatever; they'll have to deactivate some modules and sell X-whatever versions)
    Llano: 2 Bulldozer modules
    Bulldozer: 4 Bulldozer modules
    Opterozer: 6 to a lot of Bulldozer modules
