NVIDIA's Fermi: Architected for Tesla, 3 Billion Transistors in 2010
by Anand Lal Shimpi on September 30, 2009 12:00 AM EST
A Different Sort of Launch
Fermi will support DirectX 11 and NVIDIA believes it'll be faster than the Radeon HD 5870 in 3D games. With 3 billion transistors, it had better be. But that's the extent of what NVIDIA is willing to talk about with regards to Fermi as a gaming GPU. Sorry folks, today's launch is targeted entirely at Tesla.
A GeForce GTX 280 with 4GB of memory is the foundation for the Tesla C1060 cards
Tesla is NVIDIA's High Performance Computing (HPC) business. NVIDIA takes its consumer GPUs, equips them with much more memory, and sells them as Tesla cards for personal supercomputers and datacenter computing clusters. If you have an application that can run well on a GPU, the upside is tremendous.
Four of those C1060 cards in a 1U chassis make the Tesla S1070. PCIe connects the S1070 to the host server.
NVIDIA loves to cite examples of where algorithms ported to GPUs work so much better than CPUs. One such example is a seismic processing application that HESS found ran very well on NVIDIA GPUs. It migrated a cluster of 2000 servers to 32 Tesla S1070s, bringing total costs down from $8M to $400K, and total power from 1200kW down to 45kW.
| HESS Seismic Processing Example | Tesla | CPU |
|---|---|---|
| Performance | 1 | 1 |
| # of Machines | 32 Tesla S1070s | 2000 x86 servers |
| Total Cost | ~$400K | ~$8M |
| Total Power | 45kW | 1200kW |
Obviously this doesn't include the servers needed to drive the Teslas, but presumably that's not a significant cost. That works out to a 20x reduction in cost and a roughly 27x reduction in power. The potential is clearly there; it's just a matter of how many similar applications exist in the world.
According to NVIDIA, there are many more cases like this in the market. The table below shows what NVIDIA believes is the total available market in the next 18 months for these various HPC segments:
| Processor | Seismic | Supercomputing | Universities | Defence | Finance |
|---|---|---|---|---|---|
| GPU TAM | $300M | $200M | $150M | $250M | $230M |
These figures were calculated by looking at the algorithms used in each segment, the number of Hess-like Tesla installations that could be done, and the current budget for non-GPU based computing in those markets. If NVIDIA can meet its goals here, the Tesla business could be bigger than the GeForce one. There's just one problem:
As you'll soon see, many of the architectural features of Fermi are targeted specifically for Tesla markets. The same could be said about GT200, albeit to a lesser degree. Yet Tesla accounted for less than 1.3% of NVIDIA's total revenue last quarter.
Given these numbers it looks like NVIDIA is building GPUs for a world that doesn't exist. NVIDIA doesn't agree.
The Evolution of GPU Computing
When matched with the right algorithms and programming efforts, GPU computing can provide some real speedups. Much of Fermi's architecture is designed to improve performance in these HPC and other GPU compute applications.
Ever since G80, NVIDIA has been on this path to bring GPU computing to reality. I rarely get the opportunity to get a non-marketing answer out of NVIDIA, but in talking to Jonah Alben (VP of GPU Engineering) I had an unusually frank discussion.
From the outside, G80 looks to be a GPU architected for compute. Internally, NVIDIA viewed it as an opportunistic way to enable more general purpose computing on its GPUs. The transition to a unified shader architecture gave NVIDIA the chance to, relatively easily, turn G80 into more than just a GPU. NVIDIA viewed GPU computing as a future strength for the company, so G80 led a dual life. Awesome graphics chip by day, the foundation for CUDA by night.
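To make the CUDA side of that dual life concrete, here's a minimal sketch of what data-parallel code looks like in CUDA C. This is just an illustrative example (the kernel and variable names are mine, not NVIDIA's): every element of the output gets its own thread, which is exactly the kind of work a unified shader architecture can spread across hundreds of cores.

```cuda
#include <cuda_runtime.h>

// One thread per element: the GPU replaces the CPU's serial loop
// with thousands of threads all running the same small function.
__global__ void vecAdd(int n, const float *a, const float *b, float *c)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                  // guard against the last, partially-filled block
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;      // 1M elements
    const size_t bytes = n * sizeof(float);

    // Allocate on the GPU; host-side setup and copies omitted for brevity.
    float *a, *b, *c;
    cudaMalloc(&a, bytes);
    cudaMalloc(&b, bytes);
    cudaMalloc(&c, bytes);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(n, a, b, c);
    cudaDeviceSynchronize();

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```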
Remember that G80 was hashed out back in 2002 - 2003. NVIDIA had some ideas of where it wanted to take GPU computing, but it wasn't until G80 hit that customers started providing feedback that ultimately shaped the way GT200 and Fermi turned out.
One key example was support for double precision floating point. The feature wasn't added until GT200 and even then, it was only added based on computing customer feedback from G80. Fermi kicks double precision performance up another notch as it now executes FP64 ops at half of its FP32 rate (more on this later).
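To put that rate in context, here's a hedged sketch (again my example, not NVIDIA's code) of the same kernel in both precisions. The source is structurally identical; only the data type changes. If Fermi really executes FP64 at half its FP32 rate, the double version of a compute-bound kernel like this should run at roughly half the speed of the float version, rather than falling off a cliff.

```cuda
#include <cuda_runtime.h>

// The same AXPY kernel, templated over precision. The code is identical;
// only the width of the math datapath changes. Per NVIDIA's claim, Fermi
// executes FP64 ops at half the FP32 rate, so for compute-bound work
// axpy<double> should run at roughly half the speed of axpy<float>.
template <typename T>
__global__ void axpy(int n, T a, const T *x, T *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];   // one multiply-add per element
}

// Explicit launches (pointers assumed to be device allocations):
//   axpy<float> <<<blocks, 256>>>(n, 2.0f, xf, yf);  // FP32 path
//   axpy<double><<<blocks, 256>>>(n, 2.0,  xd, yd);  // FP64 path, ~half rate
```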
While G80 and GT200 were still primarily graphics chips, NVIDIA views Fermi as a processor that makes compute just as serious as graphics. NVIDIA believes it's on a different course, at least for the short term, than AMD. And you'll see this in many of the architectural features of Fermi.
Comments
SiliconDoc - Wednesday, September 30, 2009 - link
No they did not post earnings, other than in the sense IN THE RED LOSSES called sales.
shotage - Wednesday, September 30, 2009 - link
I'm not sure what your argument is, SiliconDuck.. But maybe you should stop typing and go into hibernation to await the GT300's holy ascension from heaven! FYI: It's unhealthy to have shrines dedicated to silicon, dude. Get off the GPU cr@ck!!!
On a more serious note: Nvidia is good, but ATI has gotten a lot better.
I just bought a GTX260 recently, so I'm in no hurry to buy at the moment. I'll be eagerly waiting to see what happens when Nvidia actually has a product launch and not just some lame paper/promo launch.
SiliconDoc - Wednesday, September 30, 2009 - link
My argument is that I've heard the EXACT SAME geekfoot whine before, twice in fact. Once for G80, once for GT200, and NOW, again... Here is what the guy said I responded to:
" Nvidia is painting itself into a corner in terms of engineering and direction. As a graphical engine, ATI's architecture is both smaller, cheaper to manufacture and scales better simply by combining chips or expanding # of units as mfg tech improves.. As a compute engine, Intel's Larabee will have unmatched parallel thread processing horsepower. What is Nvidia thinking trying to pass on this huge, monolithic albatross? It will lose on both fronts. "
---
MY ARGUMENT IS: A red raging rooster who just got their last two nvidia destruction calls WRONG for G80 and GT200 (the giant brute force non-profit expensive blah blah blah) is likely - to the tune of 100% - TO BE GETTING THIS CRYING SPASM WRONG AS WELL.
---
When there is clear evidence Nvidia has been a marketing genius (it's called REBRANDING by the bashing red rooster crybabies) and has a billion bucks to burn a year on R&D, the argument HAS ALREADY BEEN MADE FOR ME.
-----
The person you should be questioning is the opinionated raging nvidia disser, who by all standards jives out an arrogant WHACK JOB on nvidia, declaring DUAL defeat...
QUOTETH! "What is Nvidia thinking trying to pass on this huge, monolithic albatross? It will lose on both fronts."
---
LOL that huge monolithic albatross COMMANDS $475,000.00 for 4 of them in some TESLA server for the collegiate geeks and freaks all over the world - I don't suppose there is "loss on that front", do you?
ROFLMAO
Who are you questioning and WHY? Why aren't you seeing clearly? Did the reds already brainwash you? Have the last two gigantic expensive cores "destroyed nvidia" as they predicted?
--
In closing "GET A CLUE".
shotage - Wednesday, September 30, 2009 - link
Found my clue.. I hope you get help in time: http://www.physorg.com/news171819640.html
SiliconDoc - Thursday, October 1, 2009 - link
You are your clue, and here is your buddy, your duplicate: "What is Nvidia thinking trying to pass on this huge, monolithic albatross? It will lose on both fronts."
Now, I quite understand denial is a favorite pastime of losers, and you've effectively joined the red club. Let me convert for you.
"What is Ati thinking trying to pass on this over-length, heat-soaked, barely better afterthought? It will lose on its only front."
-there you are schmucko, a fine example of real misbehavior you pass-
AaronJD - Wednesday, September 30, 2009 - link
While I definitely prefer the $200-$300 space that ATI released the 48xx at, it seems like $400 is the magic number for single GPUs. Anything much higher than that is in multi-GPU space, where you can get away with a higher price-to-performance ratio. If Nvidia can hit the market with a well engineered $400 or so card that is easily pared down, then they can hit a market ATI would have trouble scaling to, while being able to easily re-badge gimped silicon to meet whatever market segment they can best compete in with whatever quality yield they get.
Regarding Larrabee, I think Nvidia's strategy is to just get in the door first. To compete against Intel's first offering they don't need to do something special; they just need to get the right feature set out there. If they can get developers writing for their hardware asap, Tesla will have done its job.
Zingam - Thursday, October 1, 2009 - link
Until that thing from NVIDIA comes out, AMD has time to work on a response, and if they are not lazy or stupid they'll have a match for it. So either way, I believe that things are going to get more interesting than ever in the next 3 years!!!
:D Can't wait to hear what DirectX 12 will be like!!!
My guess is that in 5 years we will have truly new CPUs - ones that do what GPUs + CPUs are doing together today.
Perhaps we'll come to the point where we get blade-like home PCs. If you want more power, you just shove in another board. Perhaps PC architecture will change completely once software gets ready for SMP.
chizow - Wednesday, September 30, 2009 - link
Nvidia is also launching Nexus at their GDC this week, a plug-in for Visual Studio that will basically integrate all of these various APIs under an industry standard IDE. That's the launching point imo for cGPU, Tesla and everything else Nvidia is hoping to accomplish outside of the 3D gaming space with Fermi. Making their hardware more accessible to create those next killer apps is what's been missing in the past with GPGPU and CUDA. Now it'll all be cGPU and transparent in your workflow within Visual Studio.
As for the news of Fermi as a gaming GPU, very excited on that front, but not all that surprised really. Nvidia was due for another home run, and it looks like Fermi might just clear the ballpark completely. Tough times ahead for AMD, but at least they'll be able to enjoy the 5850/5870 success for a few months.
ilkhan - Wednesday, September 30, 2009 - link
If it plays games faster/prettier at the same or better price, who cares what the architecture looks like? On a similar note, if the die looks like that first image (which is likely), chopping it down to smaller price points looks incredibly easy.
papapapapapapapababy - Wednesday, September 30, 2009 - link
"Architecturally, there aren't huge lessons to be learned from RV770"SNIF SNIF BS!
"ATI's approach is much more cautious"
more like "ATI's approach is much more FOCUSED"
(eyes on the ball, people)
"While Fermi will play games, it is designed to be a general purpose compute machine."
nvidia is starting to sound like Sony: "the ps3 is not a console, it's a supercomputer @ HD movie player, it only does everything." Guess what? People wanted to play games. Nintendo (the focused company) did just that > games, not movies, not HD graphics. Games, motion control. Sony - like nvidia here - didn't have its eyes on the ball.