NVIDIA's Fermi: Architected for Tesla, 3 Billion Transistors in 2010
by Anand Lal Shimpi on September 30, 2009 12:00 AM EST - Posted in GPUs
Efficiency Gets Another Boon: Parallel Kernel Support
In GPU programming, a kernel is the function or small program running across the GPU hardware. Kernels are parallel in nature and perform the same task(s) on a very large dataset.
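To make that concrete, here is a minimal, generic CUDA sketch (not taken from the article): a hypothetical SAXPY kernel in which every thread applies the same operation to one element of a large array.

// A minimal, hypothetical CUDA kernel: single-precision a*x + y (SAXPY).
// Every thread performs the same operation on one element of a large array.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)                                      // guard the final partial block
        y[i] = a * x[i] + y[i];
}

// Launched with enough 256-thread blocks to cover all n elements:
//   saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);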
Typically, companies like NVIDIA don't disclose their hardware limitations until a developer bumps into one of them. In GT200/G80, the entire chip could only be working on one kernel at a time.
When dealing with graphics this isn't usually a problem. There are millions of pixels to render. The problem is wider than the machine. But as you start to do more general purpose computing, not all kernels are going to be wide enough to fill the entire machine. If a single kernel couldn't fill every SM with threads/instructions, then those SMs just went idle. That's bad.
GT200 (left) vs. Fermi (right)
Fermi, once again, fixes this. Fermi's global dispatch logic can now issue multiple kernels in parallel to the entire system. At more than twice the size of GT200, the likelihood of idle SMs goes up tremendously; NVIDIA needs to be able to dispatch multiple kernels in parallel to keep Fermi fed.
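As a rough sketch of what this enables from the CUDA side (kernelA, kernelB, d_a, and d_b are hypothetical names, and this is a generic illustration rather than anything from the article): two narrow kernels launched into separate streams can share the machine on Fermi, where GT200-class hardware would run them back to back.

// Hypothetical illustration: two small kernels in separate CUDA streams.
// On GT200-class hardware they run one after the other; on Fermi the
// dispatcher may run them concurrently, filling SMs that neither fills alone.
cudaStream_t s1, s2;
cudaStreamCreate(&s1);
cudaStreamCreate(&s2);

kernelA<<<16, 256, 0, s1>>>(d_a);   // narrow grid, can't occupy every SM
kernelB<<<16, 256, 0, s2>>>(d_b);   // may overlap with kernelA on Fermi

cudaDeviceSynchronize();            // wait for both streams to drain
cudaStreamDestroy(s1);
cudaStreamDestroy(s2);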
Application switch time (moving between graphics and CUDA mode) is also much faster on Fermi. NVIDIA says the transition is now 10x faster than on GT200, and fast enough to be performed multiple times within a single frame. This is very important for implementing more elaborate GPU accelerated physics (or PhysX, great ;)…).
The connections to the outside world have also been improved. Fermi now supports parallel transfers to/from the CPU. Previously CPU->GPU and GPU->CPU transfers had to happen serially.
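A hedged sketch of what that looks like in CUDA (buffer names and sizes are made up, and the snippet assumes it lives in a host function with cuda_runtime.h included): with pinned host memory and two streams, an upload and a download can be issued so that they overlap on hardware with two copy engines, instead of waiting on each other.

// Hypothetical illustration of overlapped transfers. Pinned (page-locked)
// host memory is required for cudaMemcpyAsync to run asynchronously.
const size_t N = 1 << 20;
float *h_in, *h_out, *d_in, *d_out;
cudaMallocHost((void **)&h_in,  N * sizeof(float));   // pinned host buffers
cudaMallocHost((void **)&h_out, N * sizeof(float));
cudaMalloc((void **)&d_in,  N * sizeof(float));       // device buffers
cudaMalloc((void **)&d_out, N * sizeof(float));

cudaStream_t up, down;
cudaStreamCreate(&up);
cudaStreamCreate(&down);

// Issue the two copies in different streams; with two DMA engines the
// host-to-device and device-to-host transfers can be in flight together.
cudaMemcpyAsync(d_in,  h_in,  N * sizeof(float), cudaMemcpyHostToDevice, up);
cudaMemcpyAsync(h_out, d_out, N * sizeof(float), cudaMemcpyDeviceToHost, down);

cudaStreamSynchronize(up);
cudaStreamSynchronize(down);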
415 Comments
SiliconDoc - Thursday, October 1, 2009 - link
The R600 was great, you idiot. Of course, when hating NVIDIA is your real gig, I don't expect you to do anything but parrot off someone else's text, get the idea wrong, and get the repetition incorrect.
The R600 was and is great, and has held up a long time, like the G80. Of course if you actually had a clue, you'd know that, and be aware that you refuted your own attempt at a counterpoint, since the R600 was "great on paper" and also "in gaming machines".
It's a lot of fun when so many fools prove it themselves while trying to do anything other than scream like a lunatic.
Great job, you put down a really good ATI card and slapped yourself and your point doing it. It's pathetic, but I can't claim it's not SOP, so you have plenty of company.
papapapapapapapababy - Wednesday, September 30, 2009 - link
Because both MS and Sony are copying Nintendo... that means next consoles > minuscule speed bump, low price, and (lame) motion control attached. All this tech is useless with no real killer app EXCLUSIVE FOR THE PC! But hey, who cares, let's play PONG at 900 fps!
Lonyo - Wednesday, September 30, 2009 - link
Did you even read the article? The point of this tech is to move away from games, so the killer app for it won't be games but HPC programs.
SiliconDoc - Thursday, October 1, 2009 - link
I think the point is - the last GT200 was ALSO TESLA - and so of course... It's the SECOND TIME the red roosters can cluck and cluck and cluck "it won't be any good" and "it's not for gaming".
LOL
Wrong before, wrong again, but never able to learn from their mistakes, the barnyard animals.
Zingam - Thursday, October 1, 2009 - link
The last time I bought the most expensive GPU available was the Riva TNT! Sorry, but even if they offer this for gamers I won't be able to buy it. It is far above my budget.
I'd buy based on quality/price/features! And not based on who has the better card on paper in year 20xx.
SiliconDoc - Thursday, October 1, 2009 - link
Well, for that, I am sorry in a sense, but on the other hand I find it hard to believe, depending upon your location in the world. Better luck if you're stuck in a bad place, and good luck on keeping your internet connection in that case.
ClownPuncher - Thursday, October 1, 2009 - link
Or maybe he has other priorities besides being an asshole.
SiliconDoc - Thursday, October 1, 2009 - link
Being unable and choosing not to are two different things. And generally speaking, ATI users are unable, and therefore cannot choose to, because they sit on that thing you talk about being.
Now that's how you knock out a clown.
Lord 666 - Wednesday, September 30, 2009 - link
That actually just made my day; seeing a VP of Marketing speak their mind.
Cybersciver - Friday, October 2, 2009 - link
Yeah, that was cool. Don't know about you guys, but my interest in GPUs is gaming @ 1920x1200. From that POV it looks like Nvidia's about to crack a coconut with a ten-ton press.
My 280 runs just about everything flat-out (except Crysis, naturally) and the 5850 beats it. So why spend more? Most everything's a console port these days, and they aren't slated for an upgrade till 2012, at least last I heard.
Boo hoo.
Guess that's why multiple-screen gaming is starting to be pushed.
No way Jose.