28 Comments

  • HollyDOL - Friday, November 22, 2019 - link

    Looking at the loose pair of 8-pin PCIe connectors in the topmost photo, it seems one card was borrowed for review by somebody :-)
  • Kevin G - Friday, November 22, 2019 - link

    I wonder if we'll get a Titan VS then.
  • imaheadcase - Friday, November 22, 2019 - link

    What is that foam insert behind the cards about? It almost looks like shipping material they forgot to remove..
  • Qasar - Friday, November 22, 2019 - link

    what foam ??
  • mode_13h - Saturday, November 23, 2019 - link

    The foam is a strip between the motherboard and the ends of the cards.

    What strikes me about that pic is how little room there is for airflow between the cards.
  • extide - Saturday, November 23, 2019 - link

    In a server the air flows through the cards lengthwise -- you can see the front of the nearest one. These cards don't even have fans on them; they rely on the fans in the server, which sit directly behind them, to blow air through them and out the back of the case. That way they can be packed right up next to each other with no issue.

    The foam looks like a little bit of additional support, they are probably fairly heavy.
  • Valantar - Saturday, November 23, 2019 - link

    Wouldn't it also be logical for the foam to ensure that as much air as possible goes into the cooler rather than passing underneath the GPU?
  • extide - Saturday, November 23, 2019 - link

    Well, the only way for the air to get out the back is probably through the GPUs.
  • imaheadcase - Saturday, November 23, 2019 - link

    Yeah, it just seems kinda odd to see. It even looks like it would block airflow. Seems like the PCIe slot should have been more than enough... considering the monster cards DIY builders run at home in default slots.
  • DaveLT - Saturday, November 30, 2019 - link

    The foam is there to keep the cards from touching each other. Most Quadros meant for servers don't have fans, but they're much better off with the server's chassis fans forcing air through them.
    Those 120mm fans at speed are fantastically powerful!
    Blower coolers wouldn't be able to cool the Teslas.
  • UltraWide - Friday, November 22, 2019 - link

    just stabilizer padding.
  • mode_13h - Saturday, November 23, 2019 - link

    So, the V100 came out just 1 year after the P100. Now, it's been with us for about 2.5 years... what's up with that? I expected some big announcement...

    I guess they're trying really hard to let AMD catch up. Maybe when AMD announces Arcturus, that's when we'll finally hear about Nvidia's next datacenter chip (note: I didn't say GPU).
  • extide - Saturday, November 23, 2019 - link

    I mean, you almost gotta wonder if Nvidia got tripped up somehow... You would think they'd have a 7nm line out by now, but instead they did the Turing Super refreshes. They're probably humming along just fine, but... it ALMOST seems a little fishy.
  • Santoval - Saturday, November 23, 2019 - link

    It's the same as Intel when they had no competition; Nvidia is well ahead of AMD, particularly in the top-end consumer and professional markets. When you are the market leader you have little incentive to innovate and/or switch to a cutting-edge process node. Of course that's how market leaders stop being market leaders, but by the time they realize it, it's already too late.

    Nvidia *will* innovate (eventually), with Ampere next year. By that time AMD will have released RDNA2 based graphics cards, though as of yet it's unknown if they will be able to surpass Nvidia. They probably won't, not even in ray-tracing. It also depends on whether Samsung's 7nm process node (that Ampere will be fabbed with) will turn out to be better or worse than TSMC's 7nm+ process node.
  • Morawka - Sunday, November 24, 2019 - link

    perhaps Nvidia can't get access to TSMC's 7nm line in the volumes they need. I'm always reading about Apple, Huawei and Samsung eating up all the capacity.
  • rahvin - Monday, November 25, 2019 - link

    It's more likely cost. Nvidia, as one of TSMC's first big clients, had preferential access to new processes and even a deal for the first 5,000 wafers being free. That deal expired about 2 years ago.

    With AMD and others moving to TSMC, I'm willing to bet Nvidia wouldn't have been able to afford the move to 7nm due to the margin impact, and, being ahead, made a strategic decision to stay on the older process and make more money. The last 3 quarters or so have seen them boost their margins about 5% (probably at least part of that came from holding off on 7nm).

    But if AMD offers competition at the high end soon, it could hurt Nvidia badly on the margin side in future quarters, as they'd be forced to spin up on 7nm while wafer prices are still high. AMD has had problems focusing on CPU and GPU at the same time: if they focus on CPU their GPU side tends to slip, and the reverse if they focus on GPU. It's one of the things Lisa Su needs to fix at AMD. AMD needs strong division leads who can move forward aggressively in both product segments. Until they can perform strongly in both divisions the company isn't fixed. Lisa has done a great job on the CPU side, but the GPU side is still lagging, and that's leaving Nvidia room to raise prices, reduce innovation, and milk the segment.
  • yannigr2 - Saturday, November 23, 2019 - link

    In other words

    V100 Super
  • AshlayW - Saturday, November 23, 2019 - link

    Full GV100? All 5376 CC?
  • Kjella - Saturday, November 23, 2019 - link

    Neat, but I wish they'd offer a budget deep learning card. So many models assume you'll have 11+ GB of memory and will crash if they go OOM making the 1080 Ti / 2080 Ti the low bar for entry. Something like the RTX 2060 but with 12GB RAM instead of 6GB would be a perfect training box.
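    To put rough numbers on why memory capacity becomes the entry bar: a common back-of-envelope rule (my assumption here, not something from the thread) is that plain FP32 training with Adam needs about 16 bytes of VRAM per parameter (weights + gradients + two optimizer moments), before even counting activations:

    ```python
    # Back-of-envelope VRAM estimate for FP32 training with Adam.
    # 16 bytes/param = 4 (weights) + 4 (gradients) + 8 (two Adam moments).
    # This deliberately ignores activations, which often dominate.
    def min_vram_gb(params_millions, bytes_per_param=16):
        return params_millions * 1e6 * bytes_per_param / 1024**3

    # A 350M-parameter model needs ~5.2 GB for parameter state alone,
    # so activations easily push it past a 6 GB RTX 2060.
    print(f"{min_vram_gb(350):.1f} GB")
    ```

    Under those assumptions, even a mid-size model leaves almost nothing of a 6 GB card for activations and batch size, which is why 11 GB ends up feeling like the floor.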
  • Rudde - Saturday, November 23, 2019 - link

    Quadro RTX 5000 is basically a GeForce RTX 2080 Super with 16GB memory.
    The T4 accelerator is a GeForce RTX 2070 Super with 16GB memory that is downclocked to 600 MHz.
    The P6 accelerator is a GeForce GTX 1070 Ti with 16GB memory downclocked to 1000 MHz.
    Quadro P5000 is a GeForce GTX 1080 with 16GB memory.

    I'm not saying that these are cheaper though, as I am unaware of their prices.
  • p1esk - Sunday, November 24, 2019 - link

    All these pro cards you mentioned are significantly more expensive than RTX 2080Ti. The problem is that even 11GB of memory is too little to train any decent model. Most serious DL research today is done on 8xV100 servers.
  • brucethemoose - Monday, November 25, 2019 - link

    One option is to rent a multi GPU rig from someone like vast.ai, or the many other services out there. It makes financial sense if you aren't training 24/7.

    But yeah, you're right. It would be awesome if Nvidia's board partners had the wiggle room to make their own double-capacity cards, like the old 4GB GTX 680s or the 8GB 290X. But their hands are obviously tied for whatever reason, as otherwise there would be double-capacity RTX and Pascal cards everywhere.
  • CiccioB - Monday, November 25, 2019 - link

    "The new GPU we saw was called the V100S (or V100s). "

    Apparently this is not a new GPU but simply a new card.
    The GPU name is GV100, the card is V100.
    Having a V100S means a new board, not necessarily a new ASIC AFAIK.
  • marxxx - Monday, November 25, 2019 - link

    Official specifications https://www.nvidia.com/en-us/data-center/tesla-v10...
  • CiccioB - Monday, November 25, 2019 - link

    Unfortunately it just states that they are using faster memory, not whether there's a differently cut version of GV100 or whether they achieved the higher performance simply by raising the frequencies.
  • jabbadap - Monday, November 25, 2019 - link

    Well, the datasheet says 5120 CUDA cores, but whether that's really the case is another question. 16.4 TFLOPS implies the cores clock at about 1.6 GHz, which isn't an out-of-the-question clock. But with the same TDP as the Tesla V100 PCIe, it sounds a bit odd. Could the newer, faster HBM2 be much more power efficient, freeing up power budget for the GPU, or is it just some marketing boost-clock trick?

    https://images.nvidia.com/content/technologies/vol...
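    The clock arithmetic above can be checked directly, assuming the standard 2 FLOPs per CUDA core per clock for FP32 (one fused multiply-add):

    ```python
    # Sanity check: what boost clock does the quoted V100S peak FP32 imply?
    # peak FP32 FLOPS = 2 (FMA) * CUDA cores * clock.
    cores = 5120          # CUDA cores listed on the datasheet
    tflops_fp32 = 16.4    # quoted peak FP32 for the V100S

    clock_ghz = tflops_fp32 * 1e12 / (2 * cores) / 1e9
    print(f"implied boost clock: {clock_ghz:.2f} GHz")  # ~1.60 GHz
    ```

    So the quoted figure is consistent with the full 5120-core configuration at roughly a 1.6 GHz boost, rather than requiring a different core count.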
  • CiccioB - Monday, November 25, 2019 - link

    Oh, I see, they are 5120 cuda cores for all variants.
    So this is not a new GPU at all, just a new board with new HBM2e (Aquabolt?) and increased boost frequencies.
  • AshlayW - Monday, November 25, 2019 - link

    Full GV100 silicon has 5376 CUDA cores. I wonder if we will ever see it fully enabled? Probably never, because of yields on that absolutely enormous chunk of silicon.
