37 Comments

  • mgl888 - Tuesday, March 29, 2016 - link

    Has latency improved significantly with GDDR5X?
  • davidorti - Tuesday, March 29, 2016 - link

    Hi, just an innocent question: Why didn't you include HBM in the "GPU Memory Math" table? It crushes memory bandwidth per watt: http://www.anandtech.com/show/9390/the-amd-radeon-...
  • bug77 - Tuesday, March 29, 2016 - link

    And yet, the Fury X only uses 10W less than a 980Ti (Fury X only has 4GB VRAM, 980Ti has 6): http://www.techpowerup.com/reviews/Sapphire/R9_Fur...

    Don't get too hyped over HBM. It will replace GDDR in a few years, but right now it's just an engineering wonder (as in, no real-life benefits).
  • Drumsticks - Tuesday, March 29, 2016 - link

    That isn't a comparison purely of HBM vs GDDR5 though. The architectural disadvantages of GCN in performance per watt vs Maxwell are pretty well known. HBM definitely provides some pretty reasonable power savings.
  • Yojimbo - Tuesday, March 29, 2016 - link

    Why confuse the issue by comparing two different GPU architectures? HBM has real-life benefits.
  • bug77 - Wednesday, March 30, 2016 - link

    Well, the video cards using HBM today don't use less power than something comparable, and they don't really break performance records either. But yeah, they have real-world benefits </sarcasm>

    HBM does better in GB/s/W. In raw power draw it doesn't do so well: http://www.anandtech.com/show/9266/amd-hbm-deep-di...
    As you can see, it is estimated that 4GB of HBM will draw about half the power of 4GB of GDDR5. But that's only a ~15W reduction when the complete video card draws over 200W on average when gaming. It's still an improvement, but it's not earth-shattering. The added bandwidth is where the real advantage of HBM lies, but first we need GPUs that can put that bandwidth to good use. The current Fury X barely manages to inch ahead of the 980Ti at 4K, without actually being playable at that resolution.
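
    For illustration, here is a rough sketch of that math in Python, using the ballpark figures from this thread (roughly 30W for 4GB of GDDR5, about half that for HBM, and a ~200W card); these are estimates quoted in the comments, not measurements:

        # Back-of-the-envelope power and bandwidth-per-watt comparison
        gddr5_mem_power_w = 30.0   # estimated power of 4GB of GDDR5 (from the linked chart)
        hbm_mem_power_w = 15.0     # estimated power of 4GB of HBM (~half of GDDR5)
        card_power_w = 200.0       # typical high-end card draw while gaming

        saving_w = gddr5_mem_power_w - hbm_mem_power_w
        print(f"Memory-side saving: {saving_w:.1f} W "
              f"({saving_w / card_power_w:.1%} of total card power)")

        # Bandwidth per watt is where HBM pulls far ahead
        gddr5_bw_gbs = 320.0   # e.g. Hawaii: 512-bit bus at 5 Gb/s
        hbm_bw_gbs = 512.0     # e.g. Fiji: four HBM1 stacks at 128 GB/s each
        print(f"GDDR5: {gddr5_bw_gbs / gddr5_mem_power_w:.1f} GB/s per W")
        print(f"HBM:   {hbm_bw_gbs / hbm_mem_power_w:.1f} GB/s per W")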
  • BurntMyBacon - Wednesday, March 30, 2016 - link

    @bug77: "And yet, the Fury X only uses 10W less than a 980Ti (Fury X only has 4GB VRAM, 980Ti has 6):"

    The link you gave only compares full-card power. AMD already stated that they used the extra power savings to push performance further on the chip itself. The fact that it comes in any lower in power at all is a minor miracle given that the Maxwell architecture is undeniably more power efficient (at least for gaming) than its GCN counterparts. Using your link as a point of reference, see the 290X (i.e. a 390X with 4GB of RAM). It is a smaller, lesser part also based on GCN, yet it has higher overall power consumption with the same amount of RAM. That's before you consider the sizable increase in memory bandwidth. I'd say there are real-life benefits to be had.

    To your point, the fact that the largest estimated power draw on the chart is only 31.5W should tell you that you shouldn't expect miracles in overall power consumption. You can at most (theoretically, not realistically) drop something like 10% of the card's power consumption. Cost and bandwidth will be larger considerations for the moment. With GDDR5X alleviating the bandwidth concerns for all but the highest-end cards, cost is going to be the major consideration of the day. Eventually, the cost of HBM will come down (mid term), bandwidth needs will go up (long term), and you'll see HBM in larger chunks of the market. Your few-years estimate probably isn't far off.
  • extide - Tuesday, March 29, 2016 - link

    Probably because the table was pretty big as-is.
  • DanNeely - Tuesday, March 29, 2016 - link

    GDDR5X is the evolutionary step that is expected to quickly replace GDDR5 on everything from mid-range cards to sub-flagship models. HBM/HBM2 will remain limited to flagship cards and possibly compact/mobile models in the current generation, and will probably need a few years to drop sufficiently in cost to displace GDDR5X across the entire product stack. On the low end we'll probably see DDR4 replace DDR3 over the next year or so - the cost gap is nearly gone, and because those GPUs tend to be bandwidth starved, DDR4 should offer a nice boost - however, I wouldn't expect to see any action there until after the higher-end cards are out. It's possible HBM2 may eventually come to these cards as well - AMD's rumored plans to put HBM2 on future APUs suggest that they think costs will fall enough to make it possible - but sub-$100 cards are so low margin that the interposer would have to drop an order of magnitude in price first; adding a ~$30 component is out of the question on cards where the margin is only a dollar or two.
  • Lolimaster - Tuesday, March 29, 2016 - link

    I think sub-$100 GPUs will disappear, or more precisely, only "legacy/MOBA" cards for older machines will be sold below that price point, and DDR4 should suffice for those.

    Even the most basic Kaveri APU features an iGPU in the same class as a $40-50 discrete one. If you want a new low-cost machine, it will run off an APU; there's no point in buying a low-end dGPU.
  • Khenglish - Tuesday, March 29, 2016 - link

    These are my thoughts too. I honestly don't see the HBM costs ever coming down much. It will always need an interposer. The interposer is a slab of silicon with just the interconnect stack and no logic. This is something fabs already know how to make very well, so if it's expensive to make now, it will stay expensive. I see HBM staying as the top-end-only memory solution, used on what today are 384-bit and 512-bit cards, with GDDR5X taking the spot of what today are GDDR5 128-bit to 256-bit cards.
  • BurntMyBacon - Wednesday, March 30, 2016 - link

    @Khenglish: "I honestly don't see the HBM costs ever coming down much. It will always need an interposer. The interposer is a slab of silicon with just the interconnect stack and no logic. This is something fabs already know how to make very well, so if it's expensive to make now, it will stay expensive."

    Five points of interest:
    1) The interposer size is currently at the maximum limit that the reticle can handle on the current fabrication process.
    2) There isn't a large demand for it yet, so economies of scale haven't kicked in.
    3) They are still looking for return on investment to cover research costs.
    4) High-end "premium" items often carry an extra "premium tax" that doesn't follow them to commodity items.
    5) Smaller chips can use smaller interposers.

    I think there is room in there for the price to drop some. A little more of the price will be hidden by the savings from a less complicated board layout and fabrication. The rest of the cost will need to be justified by performance. Cost may never get low enough for the lowest-end discrete cards, but it is also uncertain whether there will be much of a market for sub-$100 cards for much longer given IGP progression. What market remains will not likely be looking for the bandwidth of HBM anyway. The bigger question on my mind is whether (or how soon) HBM will become cost effective for the mainstream market.
  • Azix - Wednesday, March 30, 2016 - link

    Wasn't the interposer supposed to cost something like $4?
  • beginner99 - Wednesday, March 30, 2016 - link

    Yeah, in that area. Of course it depends on the exact size, but it's far from $30; much cheaper.
  • ltcommanderdata - Tuesday, March 29, 2016 - link

    GDDR5X seems like a good candidate for the PS4K to increase bandwidth without widening the bus.
  • Lolimaster - Tuesday, March 29, 2016 - link

    The PS4 needs GPU power, not only bandwidth, to deliver 4K properly.
  • andrewaggb - Wednesday, March 30, 2016 - link

    I think GDDR5X + a 14nm die shrink + more GCN cores might allow the PS4 to run games in 1080p at 60 fps. You could probably run simple games in 4K, and certainly the user interface, Netflix and whatnot in 4K.

    I think gaming in 4K on a console is years away.
  • III-V - Tuesday, March 29, 2016 - link

    The efficiency boost is impressive. It's a shame this didn't come to market sooner, but it'll be great to fill the gap between DDR and HBM-equipped GPUs.
  • haukionkannel - Tuesday, March 29, 2016 - link

    So the low end will use DDR as before.
    The mid-range will use GDDR5 as before.
    The high end may use GDDR5 or GDDR5X.
    The super high end may use GDDR5, GDDR5X or HBM...
  • DanNeely - Tuesday, March 29, 2016 - link

    Except on rebadged cards (where sadly GDDR5 may linger for a few years), GDDR5X will probably displace GDDR5 over a single generation; it's very close to being a drop-in replacement, and the higher bandwidth per chip should allow mid-range cards to drop in price thanks to the simpler PCBs that narrower buses allow.
  • valinor89 - Tuesday, March 29, 2016 - link

    Do you think that with the impending node shrink they will keep rebadging cards like they have been doing, for years to come?
    I would like to think that once high-volume production on the new nodes is common, they will try to launch new GPUs even, or especially, at the low end if they can shrink the current silicon. Most low-end GPUs we have now are getting very old.
  • DanNeely - Tuesday, March 29, 2016 - link

    If the performance gain is large enough it might accelerate things (especially on mobile); but it's going to be a while before 14nm becomes cheaper per transistor than 28nm is. AMD's mid-range rebadges will probably go in reasonably quick order; but low-end desktop parts have never been about performance/watt, only performance/dollar.
  • DanNeely - Tuesday, March 29, 2016 - link

    Actually the very bottom is more about absolute dollar amounts than even just bang for the buck.
  • MrCommunistGen - Tuesday, March 29, 2016 - link

    To some degree they always need to liquidate existing stock of old GPU dies. Of course with 28nm having been around for so long I'd doubt anything 40nm or larger falls into this category...

    You've probably also got issues with fab capacity. They prefer to concentrate the new manufacturing nodes on the higher-margin enthusiast-class gear before trickling down... like DanNeely mentions (cost/transistor).
  • DanNeely - Wednesday, March 30, 2016 - link

    The last 40nm rebadges appear to have been done about two years ago (GT705/730 and R5 210-235X), so those at least appear to finally be dead. (Which means the fab capacity has probably been switched over to making boring commodity parts for a few more years before finally being scrapped.)
  • testbug00 - Tuesday, March 29, 2016 - link

    A reminder:
    The real reason HBM saves so much power over GDDR5 is on the non-DRAM side, where the power draw is significantly lower.
  • DanNeely - Tuesday, March 29, 2016 - link

    The savings are in I/O power, courtesy of shorter links and lower clock speeds per channel. The power needed to operate the DRAM itself hasn't changed, and will end up blowing up in a few more generations. Hopefully they'll be able to avoid disaster with a new memory type again, the way HBM is buying us a few years beyond where GDDR* would hit the wall.

    http://cdn.wccftech.com/wp-content/uploads/2015/11...
  • ImSpartacus - Tuesday, March 29, 2016 - link

    Looks like Polaris 10 will be able to get its requisite Hawaii-class bandwidth on a 256-bit bus with only 10Gb/s chips. Glad that's coming along.
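
    A quick sketch of the bandwidth math behind that comparison (Hawaii's 512-bit, 5 Gb/s configuration is the shipping part; the 256-bit card with 10 Gb/s GDDR5X is hypothetical here):

        # bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gb/s)
        def bandwidth_gbs(bus_width_bits, data_rate_gbps):
            return bus_width_bits / 8 * data_rate_gbps

        hawaii = bandwidth_gbs(512, 5)        # R9 290X (Hawaii): 512-bit GDDR5 at 5 Gb/s
        gddr5x_card = bandwidth_gbs(256, 10)  # hypothetical 256-bit card with 10 Gb/s GDDR5X
        print(hawaii, gddr5x_card)            # 320.0 GB/s either way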
  • Hul8 - Tuesday, March 29, 2016 - link

    > "The first samples of GDDR5X memory chips fully leverage key architectural enhancements of the specification, including quad data rate (QDR) data signaling technology that doubles the amount of data transferred per cycle over the memory bus (compared to GDDR5)"

    GDDR5 is already QDR...
  • iwod - Wednesday, March 30, 2016 - link

    You want to save cost with a 128-bit memory controller, but starting at 4GB doesn't seem right for what is supposed to be entry-level discrete graphics. Mid-range would be 256-bit, and the high end should be all HBM.

    The difference between a discrete GPU and an iGPU is still very large. But with the continued shrinking of PC shipments, the market is likely to be polarized, with one side that gets enough from an iGPU, and the other side mostly high/mid-end GPUs.

    Sometimes I really wish GPU manufacturers would break out some numbers; it would be interesting to look at.
  • Hul8 - Wednesday, March 30, 2016 - link

    This article is only about the 8 Gbit chips becoming available. The fact that there is only one chip size limits the memory capacity/bus width combinations to the ones listed in the article: 4 GB at 128-bit (4 x 32-bit chips) or 8 GB at 256-bit (8 x 32-bit chips).

    Once we see other chip sizes, other capacity/bus combinations become available.
  • Hul8 - Wednesday, March 30, 2016 - link

    (Unless of course they decide to add 6 GB at 192-bit using 6 chips, or something along those lines.)

    (AnandTech needs comment editing badly.)
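
    A small sketch of the capacity/bus-width combinations discussed above, assuming only the 8 Gbit (1 GB), 32-bit-interface chips that are currently sampling:

        CHIP_CAPACITY_GB = 1   # one 8 Gbit GDDR5X chip
        CHIP_BUS_BITS = 32     # per-chip interface width

        for chips in (4, 6, 8):
            print(f"{chips} chips -> {chips * CHIP_CAPACITY_GB} GB on a "
                  f"{chips * CHIP_BUS_BITS}-bit bus")
        # 4 chips -> 4 GB on a 128-bit bus
        # 6 chips -> 6 GB on a 192-bit bus
        # 8 chips -> 8 GB on a 256-bit bus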
  • Mugur - Wednesday, March 30, 2016 - link

    I can hardly wait for a mid-range 8GB 390-like Radeon with 14nm and GDDR5X...
  • Eden-K121D - Wednesday, June 15, 2016 - link

    Your wait is over, if you are reading this comment.
  • LemmingOverlord - Thursday, March 31, 2016 - link

    Be that as it may, the news here is that they are sampling. The specs on these parts have been known for quite a long time (>9 months) on the Micron website. Their part numbering guide was amply quoted on other websites. The (far more) detailed document is here: https://www.micron.com/resource-details/b023c2f3-6...
  • LemmingOverlord - Thursday, March 31, 2016 - link

    and here's the datasheet on that specific part: https://www.micron.com/~/media/documents/products/...
  • xrror - Saturday, April 2, 2016 - link

    "largely based on the GDDR5 specification, but has three crucial improvements"

    I see what you did there ;)

    (the joke is Micron == Crucial)
