21 Comments

  • PeachNCream - Thursday, June 21, 2018 - link

    Mass production was so kicked off that Samsung is using CGI renders of products instead of photographs.
  • jordanclock - Thursday, June 21, 2018 - link

    Would it make a difference? Renders are better for marketing. They can fit more "Samsung" logos on all the parts.
  • PeachNCream - Thursday, June 21, 2018 - link

    It's sort of like buying a cucumber at a grocery store. It's nice to be able to touch/poke/caress said cucumber before stuffing it in...your cart...to buy it.
  • FullmetalTitan - Thursday, June 21, 2018 - link

    Realistically they just want to hide component details for the DRAM package and controller until clients have these in hand for a bit of use.
  • edzieba - Thursday, June 21, 2018 - link

    Assuming the proposed NGSFF spec is adopted, NGSFF is backward compatible with m.2 electrically and physically in terms of the connector itself (drive dimensions are wider and a little thicker than m.2 22110). Though these drives will probably be outside the price range of anyone using m.2 on desktop.
  • Santoval - Friday, March 15, 2019 - link

    Wait, NGSFF/NF1 employs the M.2 connector but is *thicker*? How is that possible? I get the "wider" part (which is self-explanatory from the renders) but how can it be thicker and backwards compatible? Does the PCB have the same thickness but the total thickness (for dies, capacitors etc) is relaxed compared to M.2?
  • iwod - Thursday, June 21, 2018 - link

    I was rather hoping we'd get 6GB/s+ with PCI-E 4
  • jordanclock - Thursday, June 21, 2018 - link

    The interface isn't the bottleneck, the NAND+controller is.
  • Gothmoth - Thursday, June 21, 2018 - link

    but it has a 12 GB buffer... so at least for a second or two it should have a higher max burst speed.
  • shabby - Thursday, June 21, 2018 - link

    It's an Evo 970 under the hood.
  • nevcairiel - Thursday, June 21, 2018 - link

    PCIe 4 should effectively double the bandwidth of PCIe 3, i.e. getting close to 8GB/s on a 4x link. However, the NAND and/or the controller are probably not up to those speeds quite yet.

    Which begs the question: why even use a PCIe 4 controller on this if it can't even saturate a 4x PCIe 3 link? Or maybe it uses a PCIe 4 2x link only?
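    The bandwidth figures above follow from PCIe per-lane signaling rates. A back-of-envelope sketch (using the spec's raw transfer rates and 128b/130b line coding for Gen3 through Gen5; real throughput is a bit lower due to protocol overhead):

    ```python
    # Rough effective bandwidth of one direction of a PCIe link.
    GT_PER_S = {3: 8.0, 4: 16.0, 5: 32.0}  # transfers/sec per lane, per generation
    ENCODING = 128 / 130                    # 128b/130b coding for Gen3 and later

    def link_gbps(gen, lanes):
        """Approximate one-way bandwidth in GB/s for a PCIe link."""
        return GT_PER_S[gen] * ENCODING / 8 * lanes

    print(round(link_gbps(3, 4), 2))  # ≈ 3.94 GB/s: a Gen3 x4 link
    print(round(link_gbps(4, 4), 2))  # ≈ 7.88 GB/s: Gen4 x4, "close to 8GB/s"
    print(round(link_gbps(4, 2), 2))  # ≈ 3.94 GB/s: Gen4 x2 matches Gen3 x4
    ```

    The last line shows why a Gen4 x2 link would make sense for a ~3 GB/s drive: same bandwidth as Gen3 x4, half the lanes.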
  • DanNeely - Thursday, June 21, 2018 - link

    At the data center level, just the increased power efficiency could be sufficient justification even if performance is unchanged.

    The switch from HDD to SSD servers massively moved the capacity bottleneck in most data centers to being thermally/power limited instead of rack space limited. Anything to reduce power per server will feed directly back into being able to fit more servers into your existing footprint.
  • CheapSushi - Thursday, June 21, 2018 - link

    Might not be 100% accurate info, but I think it's more likely using fewer lanes, like you mention. This is one of the goals with PCIe 5.0 at least, in terms of making x1 links enough for mass NVMe storage. So it could be a way of allowing mass storage on 4.0. The PLX switches are expensive, so it could be they're trying to find a balance. Check out SuperMicro for server examples with all-M.3/NF1 1U designs.
  • lightningz71 - Monday, June 25, 2018 - link

    You've hit the nail on the head here. For 72 of these units to be in a server, that's 288 lanes that need to be routed all over the place. Switch to PCIe gen 5, and that's a quarter of the number of lanes and a drastic reduction in power usage. Even with gen4, that's 144 lanes and still a significant power level reduction. For gen5, an EPYC based server wouldn't even need a PCIe multiplexer, assuming that lane bifurcation is granular enough. That's, of course, a pipe dream as doing 1x lane bifurcation from the processor would be an extraordinarily expensive affair from a circuit standpoint.

    Realistically, what you'll see is a fat data connection to the processor from a single multiplexer chip that's feeding 2x Gen 4 channels to each slot in the next generation of the product.
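    The lane-budget arithmetic in the comment above is easy to check. A small sketch (assuming the 72-drive chassis mentioned, with each drive wanting roughly Gen3 x4 worth of bandwidth):

    ```python
    # Total host lanes needed for a hypothetical 72-drive NF1 backplane,
    # holding per-drive bandwidth constant across PCIe generations.
    drives = 72
    lanes_per_drive = {3: 4, 4: 2, 5: 1}  # lanes per drive for ~4 GB/s

    total_lanes = {gen: drives * n for gen, n in lanes_per_drive.items()}
    for gen, total in total_lanes.items():
        print(f"PCIe Gen{gen}: {total} lanes for the whole backplane")
    ```

    Gen3 needs 288 lanes, Gen4 halves that to 144, and Gen5 quarters it to 72 — which is why Gen5 could plausibly drop the multiplexer entirely, bifurcation granularity permitting.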
  • Dragonstongue - Thursday, June 21, 2018 - link

    seems "nice," but why is the random write IOPS so low compared to many other SSDs that are out there and likely cost a fraction of what this probably will?

    the TBW, however, is quite good: 1423.5 per TB is "class leading" as far as I can tell, by quite a margin
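    That endurance figure converts neatly into drive writes per day. A quick sketch (the 3- and 5-year warranty windows are assumptions for illustration, not from the article):

    ```python
    # Convert endurance (TB written per TB of capacity) to drive-writes-per-day.
    tbw_per_tb = 1423.5  # the figure quoted above

    def dwpd(tbw_per_tb, warranty_years):
        """Drive writes per day over the warranty period."""
        return tbw_per_tb / (warranty_years * 365)

    print(round(dwpd(tbw_per_tb, 3), 2))  # 1.3 DWPD over 3 years
    print(round(dwpd(tbw_per_tb, 5), 2))  # 0.78 DWPD over 5 years
    ```

    1423.5 works out to exactly 1.3 DWPD over a 3-year warranty, a common enterprise rating for read-intensive drives.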
  • Death666Angel - Thursday, June 21, 2018 - link

    Probably a limitation of the controller. That is a hell of a lot of NAND chips to manage. And as I read the article, it is more of a "read the data off it and use it to compute stuff". So I would guess it fits the use case.
  • Kristian Vättö - Thursday, June 21, 2018 - link

    Enterprise SSD specs are based on steady-state performance, not 2-second bursts like client specs.
  • FunBunny2 - Thursday, June 21, 2018 - link

    Does the 10x difference in random performance strike anyone else as regressive?
  • FullmetalTitan - Thursday, June 21, 2018 - link

    Not for the use case these are intended for. These will be for large data sets and heavy database computation; the read:write ratio is heavily skewed.
  • Gothmoth - Thursday, June 21, 2018 - link

    only 3000 MB/s with 12 GB of RAM cache... did I read that right? Seems to be time for PCIe 6.0...
  • lilmoe - Thursday, June 21, 2018 - link

    I'm REALLY trying to whitelist this site to support them in bringing in more content. But with 600+ requests on my network tab in devtools AFTER the browser tried to block all of the junk your ads and trackers are pulling in, I'm sorry.

    That's just unacceptable. Do you guys even whitelist your own site? Your reader base is fairly tech savvy. Do you think they'll whitelist the site? There just has to be another way to help us support you guys. What a shame.
