15 Comments
wingless - Monday, November 16, 2015 - link
I have an all-Nvidia rig with a G-Sync display, 3D Vision 2, and a 980...but I would go with AMD for compute before Nvidia. Maybe Nvidia has made accessing their sub-par compute capabilities easier with CUDA, I suppose?

testbug00 - Monday, November 16, 2015 - link
Given they're using Pascal cards, they likely have FP64 capabilities added back in. And there are plenty of cases where FP32 is enough, although I don't know if this is one of them.
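For a concrete sense of the FP32 vs FP64 distinction being discussed, here is a minimal sketch (pure Python stdlib; `struct` is used only to round a value to FP32 precision, and this example is an editorial illustration, not part of any commenter's post):

```python
import struct

def to_f32(x: float) -> float:
    # Round a Python float (which is FP64) to FP32 precision and back.
    return struct.unpack('f', struct.pack('f', x))[0]

# FP32 has a 24-bit mantissa: an increment of 1e-8 on 1.0 is lost entirely.
print(to_f32(1.0 + 1e-8) == 1.0)  # True: FP32 cannot represent the increment

# FP64 has a 53-bit mantissa and resolves the same increment fine.
print((1.0 + 1e-8) == 1.0)        # False: the FP64 value keeps it
```

This is why workloads that accumulate many small contributions (weather models like NOAA's among them) often can't get by on FP32 alone.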
mas6700 - Monday, November 16, 2015 - link
Got any references to support your snide remark about Nvidia's "sub-par" compute performance? Let's see some numbers showing how well your AMD card does programmed with OpenCL versus a Quadro M6000 running the same code using CUDA. Yeah, thought so....

testbug00 - Monday, November 16, 2015 - link
You do realize Wingless is talking about FP64, right? AMD needs less than 10% utilization on their top-end pro card to beat the M6000 at 100%.
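As a rough sanity check on that "under 10%" figure, using the commonly published peak throughput numbers of the era (approximate, and assuming the AMD card in question is a FirePro W9100 at ~2.6 TFLOPS FP64, while the GM200-based M6000 runs FP64 at 1/32 of its ~6.1 TFLOPS FP32 rate):

```python
# Back-of-the-envelope check of the "under 10% utilization" claim,
# using approximate published peak throughput figures.
m6000_fp32_tflops = 6.1
m6000_fp64_tflops = m6000_fp32_tflops / 32  # GM200 FP64 runs at 1/32 rate
w9100_fp64_tflops = 2.6                     # FirePro W9100 peak FP64

# Fraction of the AMD card's FP64 peak needed to match the M6000 flat out:
fraction = m6000_fp64_tflops / w9100_fp64_tflops
print(f"{fraction:.1%}")  # under 10%
```

Peak FLOPS aren't delivered FLOPS, of course, but the order-of-magnitude gap in FP64 rate is real for these two cards.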
HighTech4US - Monday, November 16, 2015 - link
And AMD lacks greatly in the software infrastructure, thus the lack of AMD wins. Running at the 100% speed you mention does no good if your wheels are off the ground.
People like yourself pooh-poohed it when Nvidia stated they were a software company.
Software sells hardware, as everyone can see from the current earnings reports of Nvidia (great) and AMD (sucks).
testbug00 - Monday, November 16, 2015 - link
For the comparison of compute (OP implied FP64) the M6000 is garbage, and it needs the AMD card to be utilized under 10% WHILE THE NVIDIA CARD IS AT 100%. Now, Nvidia offers a far more reasonable FP64 card than the M6000, at which point having "only" about 70% of AMD's raw power is enough thanks to their better software.

And Nvidia is in great part a software company. I just wish they could write good software outside of professional realms. Most of the sh*t they've written for gaming has been intentionally coded poorly or in an inefficient manner. Or they don't know how to code.
Yojimbo - Monday, November 16, 2015 - link
That "raw power" is as terrible a measurement for compute as "fill rate" is for graphics performance. It's more than just "better software", I think.

testbug00 - Tuesday, November 17, 2015 - link
Maybe, but having better software, etc. allows one to utilize hardware better. My point was that if you're looking to do heavy FP64 compute, talking about the M6000 is pointless; you would have to try to sabotage the AMD option to get worse performance.

Raw power is bad for many comparisons, but if you have over an order of magnitude more power, it probably is quite meaningful for performance.
Yojimbo - Monday, November 16, 2015 - link
Yes, but no one would buy an M6000 for FP64. The M6000 is a workstation graphics card anyway, not a compute card. Why does it need FP64?

testbug00 - Tuesday, November 17, 2015 - link
I fully agree. My response is to mas6700, who suggested such a card.

Yojimbo - Monday, November 16, 2015 - link
I think you are basing your judgment of the relative compute performance of AMD's and NVIDIA's cards on the compute benchmarks listed on enthusiast hardware sites such as AnandTech. That's a mistake. Kepler still seems to be preferred to AMD's latest compute offerings, judging by the July Top500 list, and this new NOAA system will most likely use Pascal.

Nenad - Friday, November 20, 2015 - link
1) Even AMD recognized that C++ in general (and CUDA in particular) have much better support in HPC, which is why AMD will be adding the ability to compile CUDA code for AMD.
2) I do not think that NVIDIA's Tesla cards are 'sub-par' to AMD's FirePro cards for 64bit HPC. Also, NVIDIA consumer cards are not 'sub-par' to AMD cards for 32bit compute.
BTW, I think that it is mostly CUDA and the maturity of NVIDIA's HPC environment that are the #1 reason for NVIDIA's advantage over AMD in HPC.
Yojimbo - Monday, November 16, 2015 - link
"The cluster will be operational next year, and giving the timing and the wording as a “next-generation” cluster, it’s reasonable to assume that this will be Pascal powered like Summit and Sierra."

Summit and Sierra are planned to use Volta, I believe, not Pascal.
Ryan Smith - Monday, November 16, 2015 - link
Right you are! Thanks.

LarryMoe - Saturday, November 21, 2015 - link
I currently run PC-based home security video systems with software called Sighthound Video, which performs analytics on the stream to determine if a human form comes into view and sends video notifications to phones. In my experience, at least an i7 CPU is required to analyze 4 1080p streams concurrently. Can anyone tell me if the M4 in this article would improve performance in this situation, allowing analysis of more streams concurrently?