3D Vision Surround: NVIDIA’s Eyefinity
During our meeting with NVIDIA, they were also showing off 3D Vision Surround, which was announced at their press conference at the start of CES. 3D Vision Surround is not inherently a GF100 technology, but since its release is being timed to coincide with the GF100 cards, we’re going to take a moment to discuss it.
If you’ve seen Matrox’s TripleHead2Go or AMD’s Eyefinity in action, then you know what 3D Vision Surround is. It’s NVIDIA’s implementation of the single large surface concept so that games (and anything else for that matter) can span multiple monitors. With it, gamers can get a more immersive view by being able to surround themselves with monitors so that the game world is projected from more than just a single point in front of them.
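To put some rough numbers on that, here’s a minimal back-of-the-envelope sketch of our own (not anything from NVIDIA’s drivers) showing how a 3x1 spanned surface changes the resolution and the horizontal field of view a Hor+ game engine would render, assuming three 1920x1080 monitors and a 60-degree vertical FOV:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Assumed setup: three 1920x1080 monitors in a 3x1 landscape arrangement.
    const int    monitors = 3;
    const int    width    = 1920;
    const int    height   = 1080;
    const double vfovDeg  = 60.0;   // typical vertical FOV, held constant (Hor+ scaling)
    const double pi       = 3.14159265358979323846;

    // The driver exposes one spanned surface instead of three separate displays.
    const int    spanWidth = monitors * width;             // 5760
    const double aspect    = double(spanWidth) / height;   // 16:3 instead of 16:9

    // With Hor+ scaling the engine derives the horizontal FOV from the fixed
    // vertical FOV and the (now much wider) aspect ratio.
    const double vfov = vfovDeg * pi / 180.0;
    const double hfov = 2.0 * std::atan(std::tan(vfov / 2.0) * aspect);

    std::printf("Spanned surface: %dx%d, horizontal FOV: %.1f degrees\n",
                spanWidth, height, hfov * 180.0 / pi);
    return 0;
}
```

In other words, the game simply sees one very wide render target, and a Hor+ engine responds by pushing the horizontal view from roughly 90 degrees on a single 16:9 monitor to well past 140 degrees across three.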
NVIDIA tells us that they’ve been sitting on this technology for quite some time but never saw a market for it. With the release of TripleHead2Go and Eyefinity it became apparent to them that this was no longer the case, so they dusted the technology off. Whether this is true or it’s a sudden reaction to Eyefinity is immaterial at the moment, as it’s coming regardless.
This triple-display technology will have two names. When it’s used on its own, NVIDIA is calling it NVIDIA Surround. When it’s used in conjunction with 3D Vision, it’s called 3D Vision Surround. Obviously NVIDIA would like you to use it with 3D Vision to get the full effect (and to require a more powerful GPU) but 3D Vision is by no means required to use it. It is however the key differentiator from AMD, at least until AMD’s own 3D efforts get off the ground.
Regardless of the degree to which this is a sudden reaction to Eyefinity, ultimately this is something that was added late in the design process. Unlike AMD, who designed the Evergreen family around it from the start, NVIDIA did not, and as a result GF100 was not given the ability to drive more than 2 displays at once. The shipping GF100 cards will have the traditional 2 monitor limit, meaning that gamers will need 2 GF100 cards in SLI to drive 3+ monitors, with the second card providing the 3rd and 4th display outputs. We expect that the next NVIDIA design will include the ability to drive 3+ monitors from a single GPU; for the moment this limitation precludes doing Surround on the cheap.
GTX 280 with 2 display outputs: GF100 won't be any different
The good news is that, as we stated earlier, this is not a technology inherent to GF100. NVIDIA can do it entirely in software, and as a result will be backporting the technology to the GT200 (GTX 200 series). The drivers released for GF100 will allow GTX 200 cards to do Surround in the same manner: with 2 cards, you can run a single large surface across 3+ displays. We’ve seen this in action and it works; NVIDIA was demoing a pair of GTX 285s running in NVIDIA Surround mode at their CES booth.
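NVIDIA hasn’t detailed how its driver stitches the outputs of two cards into one surface, but conceptually the displays attached to both adapters have to be presented to the game as a single desktop. As a rough illustration only - this is ordinary DXGI enumeration code written for the sake of the example, not NVIDIA’s Surround implementation - here is how an application can walk every output on every adapter and see what it has to work with:

```cpp
// A minimal sketch, assuming a Windows/DXGI environment: enumerate every
// display output on every graphics adapter and print where it sits on the
// desktop. This is generic enumeration code, not NVIDIA's Surround driver logic.
#include <windows.h>
#include <dxgi.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

int main() {
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return 1;

    IDXGIAdapter* adapter = nullptr;
    for (UINT a = 0; factory->EnumAdapters(a, &adapter) != DXGI_ERROR_NOT_FOUND; ++a) {
        IDXGIOutput* output = nullptr;
        for (UINT o = 0; adapter->EnumOutputs(o, &output) != DXGI_ERROR_NOT_FOUND; ++o) {
            DXGI_OUTPUT_DESC desc;
            if (SUCCEEDED(output->GetDesc(&desc)) && desc.AttachedToDesktop) {
                // Each attached output reports the desktop rectangle it covers.
                const RECT& r = desc.DesktopCoordinates;
                std::printf("Adapter %u, output %u: %ldx%ld at (%ld, %ld)\n",
                            a, o, r.right - r.left, r.bottom - r.top, r.left, r.top);
            }
            output->Release();
        }
        adapter->Release();
    }
    factory->Release();
    return 0;
}
```

With two cards driving three displays, what ultimately matters for Surround is that the game sees one wide surface rather than three separate outputs like these.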
The big question of course is going to be what this does for performance on both the GF100 and GT200, along with compatibility. That’s something that we’re going to have to wait on the actual hardware for.
115 Comments
dentatus - Monday, January 18, 2010 - link
" Im sure ATi could pull out the biggest, most expensive, hottest and fastest card in the world"- they have, its called the radeon HD5970.Really, in my Australia, the ATI DX11 hardware represents nothing close to value. The "biggest, most expensive, hottest and fastest card in the world" a.k.a HD5970 weighs in at a ridiculous AUD 1150. In the meantime the HD5850 jumped up from AUD 350 to AUD 450 on average here.
The "smaller, more affordable, better value" line I was used to associating with ATI went out the window the minute their hardware didn't have to compete with nVidia DX11 hardware.
Really, I'm not buying any new hardware until there's some viable alternatives at the top and some competition to burst ATI's pricing bubble. That's why it'd be good to see GF100 make a "G80" impression.
mcnabney - Monday, January 18, 2010 - link
You have no idea what a market economy is. If demand outstrips supply, prices WILL go up. They have to.
nafhan - Monday, January 18, 2010 - link
It's mentioned in the article, but NVIDIA being late to market is why prices on ATI's cards are high. Based on transistor count, etc., there's plenty of room for ATI to drop prices once they have some competition.
Griswold - Wednesday, January 20, 2010 - link
And that's where the article is dead wrong. For the most part, the ridiculous prices were dictated by low supply vs. high demand. Now we've finally arrived at decent supply vs. high demand, and prices are dropping. The next stage may be good supply vs. normal demand. That, and not a second earlier, is when AMD themselves could willingly start price gouging due to no competition. However, the situation will be like this long after Thermi launches, for the simple reason that there is no reason to believe Thermi won't have yield issues for quite some time after they have been sorted out for AMD - it's the size of chipzilla that will give it a rough time for the first couple of months, regardless of its capabilities.
chizow - Monday, January 18, 2010 - link
I'm sure ATI would've if they could've, instead of settling for 2nd place most of the past 3 years. But GF100 isn't just about the performance crown; it's clearly setting the table for future variants based on its design changes, aimed at a broader target audience (think G92).
bupkus - Monday, January 18, 2010 - link
"So why does NVIDIA want so much geometry performance? Because with tessellation, it allows them to take the same assets from the same games as AMD and generate something that will look better. With more geometry power, NVIDIA can use tessellation and displacement mapping to generate more complex characters, objects, and scenery than AMD can at the same level of performance. And this is why NVIDIA has 16 PolyMorph Engines and 4 Raster Engines, because they need a lot of hardware to generate and process that much geometry."
Are you saying that ATI's viability and funding resources for R&D are not supported by the majority of sales, which traditionally fall into the lower-priced hardware that, btw, requires smaller and cheaper GPUs?
Targon - Wednesday, January 20, 2010 - link
Why do people not understand that with a six month lead in the DX11 arena, AMD/ATI will be able to come out with a refresh card that could easily exceed what Fermi ends up being? Remember, AMD has been dealing with the TSMC issues for longer, and by the time Fermi comes out, the production problems SHOULD be resolved. Now, how long do you think it will take to work the kinks out of Fermi? How about product availability (something AMD has been dealing with for the past few months)? Just because a product is released does NOT mean you will be able to find it for sale.
The refresh from AMD could also mean that in addition to being a faster part, it will also be cheaper. So while the 5870 is selling for $400 today, it may be down to $300 by the time Fermi is finally available for sale, with the refresh part (same performance as Fermi) available for $400. Hmmm, same performance for $100 less, and with no games available to take advantage of any improved image quality of Fermi, you see a better deal with the AMD part. We also don't know what the performance will be from the AMD refresh, so a lot of this needs to take a wait-and-see approach.
We have also seen that Fermi is CLEARLY not far enough along for even some leaked performance information to appear, which implies that it may be six MORE months before the card is really ready. Showing a demo isn't the same as letting reviewers tinker with the part themselves. Really, if it will be available for purchase in March, then shouldn't it be ready NOW, since it will take weeks to go from ready to shipping (packaging and such)?
AMD is winning this round, and they will be in the position where developers will have been using their cards for development since NVIDIA clearly can't. AMD will also be able to make SURE that their cards are the dominant DX11 cards as a result.
chizow - Monday, January 18, 2010 - link
@bupkus, no, but I can see a monster strawman coming from a mile away.
Calin - Monday, January 18, 2010 - link
"Because with tessellation, it allows them to take the same assets from the same games as AMD and generate something that will look better"No it won't.
If the game ships with "high resolution" displacement maps, NVIDIA could make use of them (and AMD might not, because of the geometry power involved). If the game doesn't ship with "high resolution" displacement maps to use for tessellation, then NVIDIA will only have a lot of geometry power going to waste, and the same graphical quality as AMD.
Remember that in big game engines there are multiple "video paths" for different GPUs - DirectX 8, DirectX 9, DirectX 10 - and NVIDIA and AMD both have optimised execution paths.