24 Comments

  • ArthurG - Tuesday, August 12, 2014 - link

    no article on Nvidia Denver ?
  • Morawka - Tuesday, August 12, 2014 - link

cuz it's faster than Apple's A7, silly
  • extide - Wednesday, August 13, 2014 - link

    No, because it is vaporware
  • Gunbuster - Tuesday, August 12, 2014 - link

Fantastic plastic! I know these get hidden in workstations, but do they have to sport dollar-store plastic shrouds? I mean, the K5200 is going to be a $2000-something card, right?
  • TiGr1982 - Tuesday, August 12, 2014 - link

    These are professional cards for doing work, not for bragging. Professionals don't care how they look, but only how they work.
  • TETRONG - Tuesday, August 12, 2014 - link

Can somebody clue me in? Why is it that GPUs all seem to be backing off bigger memory buses?
256-bit... isn't that simply an artificial constraint?

Really curious about this - shouldn't 512-bit and higher be considered the norm at this time, especially for professional usage?
  • akdj - Tuesday, August 12, 2014 - link

With memory bandwidth @ 288GB/s and 12GB RAM clocked @ 6GHz, a 256-bit bus is plenty. Quite a lot, actually - that's roughly 200,000 3.5" floppies' worth of data every second. They're good at what they do ;). I'm not so sure it needs to be larger, as 256-bit is actually quite massive for a memory bus. These are going to be incredible cards.
  • RussianSensation - Tuesday, August 12, 2014 - link

288GB/sec on the K6000 is achieved with a 384-bit bus, not a 256-bit one. Your numbers aren't lining up.
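    For reference, GDDR5 peak bandwidth works out as (bus width / 8) x effective data rate. A quick sanity check in Python, plugging in the 6GHz effective memory clock quoted above:

        # GDDR5 peak bandwidth (GB/s) = (bus width in bits / 8) * data rate (Gbps per pin)
        def peak_bw(bus_bits, gbps):
            return bus_bits / 8 * gbps

        print(peak_bw(384, 6.0))  # 288.0 -> matches the K6000's quoted 288GB/s
        print(peak_bw(256, 6.0))  # 192.0 -> what a 256-bit bus gives at the same clock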
  • Morawka - Tuesday, August 12, 2014 - link

The smaller the memory bus, the smaller the die is, and the more dies you harvest from a wafer.

Companies are going with a small-bus, big-cache approach, putting in a lot of cache to offset the narrower bus.

TLDR: to increase profit margins
  • MrSpadge - Wednesday, August 13, 2014 - link

    "the smaller the memory bus, the smaller the DIE is"

I think he was referring more to the fact that the K5200 uses an expensive GK110 chip with a 384-bit memory bus, yet artificially limits its performance by only using 256 of those bits. The wider memory bus could give nVidia "free" performance (at the cost of only a slightly more complex PCB) from the same chip, or similar performance from a chip running at a more power-efficient setting.
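    To put a number on that "free" performance: at the same memory clock, enabling the full bus is a straight 1.5x bandwidth uplift. A minimal sketch, assuming the K5200 keeps a 6GHz effective memory clock:

        # Hypothetical uplift from enabling GK110's full 384-bit bus (data rate assumed)
        data_rate = 6.0                         # Gbps per pin
        as_shipped = 256 / 8 * data_rate        # 192.0 GB/s with 256 bits active
        full_bus = 384 / 8 * data_rate          # 288.0 GB/s with all 384 bits
        print(f"{full_bus / as_shipped:.2f}x")  # 1.50x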
  • yik3000 - Tuesday, August 12, 2014 - link

So from the perspective of a 3ds Max user, with the recent improvements in viewport performance on gaming GPUs and V-Ray RT for CUDA, is it still necessary to spend crazy money on Quadros?
  • mapesdhs - Tuesday, August 12, 2014 - link


That depends on whether or not you want the quality. Gamer cards are simply not designed to be hammered in the same way as pro cards, and then there are differences in support structure, warranty, etc. It's true that various pro apps no longer show such a huge speed difference because vendors have deliberately shrunk the gap (as shown by Viewperf 12 results), but if you decide to save on the pennies by getting a gamer card, don't be surprised if it fails a lot sooner than you'd like.

    Also, please note that one reason why gamer cards are, on paper, seemingly so fast for some pro apps now is because they cut an awful lot of corners as regards geometry precision and rendering quality. For example, the performance of a 7970 in Viewperf 12 looks really impressive, but the image quality is nowhere near that of a Quadro. For many pro tasks, image precision is important, e.g. CAD, medical, etc. I guess it's up to you whether the lower cost of a gamer card is worth sacrificing what pro cards have as extras, especially ECC, better compute, etc.

    Btw, there is of course always the used market. I bought a Quadro K5000 recently; it was only 550 UKP, an excellent deal.

    Also, do note that for some pro apps the performance difference vs. gamer cards still persists, e.g. CATIA and ProE.

    Ian.

    PS. If you want strong CUDA on a budget, get a bunch of used 3GB 580s. Four will be faster than two Titan Blacks, and it works very well if you don't mind the power, noise, etc. :D
  • yik3000 - Tuesday, August 12, 2014 - link

    Thanks!!
  • mapesdhs - Tuesday, August 12, 2014 - link

    Apologies for following up my own post...

I meant to say, IMO the K5200 is the most impressive new card NVIDIA has released in quite a long time. The shader bump, VRAM increase, etc., all add up to something which finally looks like a worthy upgrade for pro users stuck with older cards like the 4000, etc., though it depends on the pricing, I guess.

    Ian.
  • p1esk - Tuesday, August 12, 2014 - link

    "If you want strong CUDA on a budget, get a bunch of used 3GB 580s. Four will be
    faster than two Titan Blacks"

    What are you talking about? 580 has 512 cores. Titan Black has 2880 cores. Four 580 cards will be slower than a single Titan card.
  • Senti - Tuesday, August 12, 2014 - link

They are very different cores. Think of old fat AMD CPU cores versus the new pathetic ones: yes, the "core count" is greater, but what about performance?
  • p1esk - Tuesday, August 12, 2014 - link

How exactly are the cores different?
  • Senti - Wednesday, August 13, 2014 - link

First of all, Kepler's cores work at a doubled frequency. Second, for many computational loads, the way cores are packed in CC2.0 is more efficient than the way it went from CC2.1 onwards: the achieved/theoretical FLOPS ratio is usually way higher for CC2.0 hardware.
  • Senti - Wednesday, August 13, 2014 - link

edit: Fermi cores work at a doubled frequency compared to Kepler ones.
  • p1esk - Wednesday, August 13, 2014 - link

Interesting, I didn't know that. I looked up the comparison of the 580 vs the Titan, and it seems I was wrong - the 580 is a decent performer for compute tasks. But I still doubt four 580s will outperform two Titan Blacks.
    Here's a good performance analysis:
    http://www.anandtech.com/show/6774/nvidias-geforce...
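    For what it's worth, here's a back-of-the-envelope peak single-precision comparison in Python. Reference clocks are assumed (the 580's 1544MHz shader clock, the Titan Black's 889MHz base clock), and per Senti's point, achieved throughput on Fermi sits closer to these peaks than Kepler's does:

        # Peak SP GFLOPS = cores * clock (GHz) * 2 FLOPs per cycle (fused multiply-add)
        def peak_gflops(cores, clock_ghz):
            return cores * clock_ghz * 2

        gtx580 = peak_gflops(512, 1.544)        # ~1581 GFLOPS (Fermi hot-clocked shaders)
        titan_black = peak_gflops(2880, 0.889)  # ~5121 GFLOPS (base clock)
        print(4 * gtx580)                       # four 580s: ~6324 GFLOPS
        print(2 * titan_black)                  # two Titan Blacks: ~10241 GFLOPS on paper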
  • romrunning - Tuesday, August 12, 2014 - link

Is there a performance comparison somewhere between these new Quadros and the new FirePros? It would be nice to see someone run SolidWorks tests on these new cards and existing ones, to really show how they perform in an application that workstation graphics cards are designed for.
  • nathanddrews - Tuesday, August 12, 2014 - link

They were just announced today, so it will be a while before they make it to market and can be benched... Tom's Hardware recently covered everything from the K6000/W9100 on down, with lots of benchmarks. Very informative, and it should give you a baseline for what to expect from these new cards.
  • aabeba - Wednesday, August 13, 2014 - link

I'm saving up some money over the summer for a PC that I'd like to use both for gaming at 1440p and 4K and for work in programs like Houdini, Maya and ZBrush. I'm still in the learning stages, but I'd like the computer to still be powerful enough a few years from now, when I'm doing more demanding work (mostly character modeling and animation). At a budget of around $7-10K, should I go with an enthusiast i7 or a Xeon CPU, and a Titan Black SLI config or a Quadro? Any tips and reasoning appreciated.
  • Sindalis - Friday, August 15, 2014 - link

Quadro cards do not use GeForce drivers and are not optimized for games.

    The difference between the i7 and the Xeon is primarily the type of memory each supports: Xeons support workstation/server ECC memory, i7s do not. You're probably going to want a Xeon so you can use ECC memory for your rendering. If ECC memory is not important to you, go with the i7.

    If you want something that's primarily a gaming machine with the ability to also be decent at graphic design, go with Titan Blacks.

    If you want a machine that's primarily for graphic design, with little focus or care for the gaming part, go with a Quadro.
