NVIDIA's Fermi: Architected for Tesla, 3 Billion Transistors in 2010
by Anand Lal Shimpi on September 30, 2009 12:00 AM EST - Posted in GPUs
ECC Support
AMD's Radeon HD 5870 can detect errors on the memory bus, but it can't correct them. The register file, L1 cache, L2 cache and DRAM all have full ECC support in Fermi. This is one of those Tesla-specific features.
Many Tesla customers won't even talk to NVIDIA about moving their algorithms to GPUs unless NVIDIA can deliver ECC support. The scale of their installations is so large that ECC is absolutely necessary (or at least perceived to be).
Unified 64-bit Memory Addressing
In previous architectures there was a different load instruction depending on the type of memory: local (per thread), shared (per group of threads) or global (per kernel). This created issues with pointers and generally made a mess that programmers had to clean up.
Fermi unifies the address space so that there's only one instruction and the address of the memory is what determines where it's stored. The lowest bits are for local memory, the next set is for shared and then the remainder of the address space is global.
The unified address space is apparently necessary to enable C++ support for NVIDIA GPUs, which Fermi is designed to do.
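To make that concrete, here's a minimal CUDA sketch of our own (not NVIDIA's code) showing what generic addressing buys the programmer: one helper that takes a plain pointer and works whether the caller passes an address in global or shared memory, something the old space-specific load instructions couldn't express in a single function.

#include <cuda_runtime.h>

// A sketch for a Fermi-class (compute capability 2.x) part. The helper takes
// a generic pointer; the hardware resolves which memory space it points into.
__device__ float sum3(const float* p)
{
    return p[0] + p[1] + p[2];
}

__global__ void kernel(const float* in, float* out)
{
    __shared__ float tile[3];
    int t = threadIdx.x;
    if (t < 3) tile[t] = in[t];        // stage a few values in shared memory
    __syncthreads();

    if (t == 0) {
        out[0] = sum3(in);             // pointer into global memory
        out[1] = sum3(tile);           // pointer into shared memory, same call
    }
}

int main()
{
    // Input values are left uninitialized; this is only a structural example.
    float *in = NULL, *out = NULL;
    cudaMalloc((void**)&in, 3 * sizeof(float));
    cudaMalloc((void**)&out, 2 * sizeof(float));
    kernel<<<1, 32>>>(in, out);
    cudaDeviceSynchronize();
    cudaFree(in);
    cudaFree(out);
    return 0;
}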
The other big change to memory addressability is the size of the address space. G80 and GT200 had a 32-bit address space, but next year NVIDIA expects to see Tesla boards with over 4GB of GDDR5 on board. Fermi supports 64-bit addresses, though the chip can physically address only 40 bits of memory, or 1TB. That should be enough for now.
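As a quick sanity check on those numbers, here's a hypothetical sketch of our own: 2^32 bytes is the old 4GB ceiling, 2^40 bytes works out to 1TB, and a 6GB allocation (assuming a board with that much memory on it) simply can't be expressed with 32-bit addresses.

#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    // 2^32 bytes = 4 GB was the old ceiling; 2^40 bytes = 1 TB is Fermi's
    // physical limit. A 6 GB request (hypothetical; it needs a board with at
    // least that much memory) cannot even be expressed with 32-bit addressing.
    size_t bytes = 6ULL << 30;                       // 6 GB
    void* buf = NULL;
    cudaError_t err = cudaMalloc(&buf, bytes);
    printf("6 GB allocation: %s\n", cudaGetErrorString(err));
    if (err == cudaSuccess) cudaFree(buf);
    return 0;
}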
Both the unified address space and 64-bit addressing are almost exclusively for the compute space at this point; consumer graphics cards won't need more than 4GB of memory for at least another couple of years. These changes were painful for NVIDIA to implement and ultimately contributed to Fermi's delay, but in NVIDIA's eyes they were necessary.
New ISA Changes Enable DX11, OpenCL and C++, Visual Studio Support
Now this is cool. NVIDIA is announcing Nexus (no, not the thing from Star Trek Generations), a Visual Studio plugin that enables hardware debugging of CUDA code in Visual Studio. With Nexus you can treat the GPU like a CPU: step into functions and inspect the state of the GPU, all from within Visual Studio. This is a huge step forward for CUDA developers.
Nexus running in Visual Studio on a CUDA GPU
Simply enabling DX11 support is a big enough change for a GPU; AMD had to go through that with RV870. Fermi also implements a wide set of ISA changes, primarily aimed at enabling C++ support. Virtual functions, new/delete, and try/catch are all parts of C++ now enabled on Fermi.
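To give a flavor of what device-side C++ looks like, here's a small sketch of our own targeting a Fermi-class (compute capability 2.x) part, using virtual dispatch and device-side new/delete; try/catch is left out of the example since CUDA doesn't expose exception handling in device code.

#include <cuda_runtime.h>
#include <cstdio>

// A small device-side class hierarchy with a virtual function. Objects with
// virtual functions must be constructed on the device for their vtables to be
// usable there, which compute capability 2.x hardware makes possible.
struct Shape {
    __device__ virtual float area() const = 0;
    __device__ virtual ~Shape() {}
};

struct Square : public Shape {
    float side;
    __device__ Square(float s) : side(s) {}
    __device__ virtual float area() const { return side * side; }
};

__global__ void demo()
{
    Shape* s = new Square(3.0f);              // device-side new (sm_20 and up)
    printf("area = %f\n", s->area());         // virtual dispatch on the GPU
    delete s;                                 // device-side delete
}

int main()
{
    demo<<<1, 1>>>();
    cudaDeviceSynchronize();
    return 0;
}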
415 Comments
SiliconDoc - Thursday, October 1, 2009 - link
The R600 was great, you idiot. Of course, when hating nvidia is your real gig, I don't expect you to do anything but parrot off someone else's text, get the idea wrong, and get the repetition incorrect.
-
The R600 was and is great, and has held up a long time, like the G80. Of course if you actually had a clue, you'd know that, and be aware that you refuted your own attempt at a counterpoint, since the R600 was "great on paper" and also "in gaming machines".
It's a lot of fun when so many fools self-proof it trying to do anything other than scream lunatic.
Great job, you put down a really good ATI card, and slapped yourself and your point doing it. It's pathetic, but I can't claim it's not SOP, so you have plenty of company.
papapapapapapapababy - Wednesday, September 30, 2009 - link
Because both MS and Sony are copying Nintendo... that means the next consoles will be a minuscule speed bump, a low price, and (lame) motion control attached. All this tech is useless with no real killer app EXCLUSIVE FOR THE PC! But hey, who cares, let's play PONG at 900 fps!
Lonyo - Wednesday, September 30, 2009 - link
Did you even read the article? The point of this tech is to move away from games, so the killer app for it won't be games, but HPC programs.
SiliconDoc - Thursday, October 1, 2009 - link
I think the point is - the last GT200 was ALSO TESLA -- and so of course... It's the SECOND TIME the red roosters can cluck and cluck and cluck "it won't be any good" and "it's not for gaming".
LOL
Wrong before, wrong again, but never able to learn from their mistakes, the barnyard animals.
Zingam - Thursday, October 1, 2009 - link
The last time I bought the most expensive GPU available was the Riva TNT! Sorry, but even if they offer this for gamers I won't be able to buy it. It is far above my budget.
I'd buy based on quality/price/features! And not based on who has the better card on paper in year 20xx.
SiliconDoc - Thursday, October 1, 2009 - link
Well, for that I am sorry in a sense, but on the other hand I find it hard to believe, depending on your location in the world. Better luck if you're stuck in a bad place, and good luck keeping your internet connection in that case.
ClownPuncher - Thursday, October 1, 2009 - link
Or maybe he has other priorities besides being an asshole.
SiliconDoc - Thursday, October 1, 2009 - link
Being unable, and choosing not to, are two different things. And generally speaking, ATI users are unable, and therefore cannot choose to, because they sit on that thing you talk about being.
Now that's how you knockout a clown.
Lord 666 - Wednesday, September 30, 2009 - link
That actually just made my day; seeing a VP of Marketing speak their mind.
Cybersciver - Friday, October 2, 2009 - link
Yeah, that was cool. Don't know about you guys, but my interest in GPUs is gaming @ 1920x1200. From that point of view it looks like NVIDIA's about to crack a coconut with a ten-ton press.
My 280 runs just about everything flat-out (except Crysis, naturally) and the 5850 beats it. So why spend more? Most everything's a console port these days, and they aren't slated for an upgrade till 2012, at least last I heard.
Boo hoo.
Guess that's why multi-screen gaming is starting to be pushed.
No way Jose.