155 Comments

  • Rodrigo - Thursday, May 23, 2013 - link

    Excellent choice for less money than Titan! :-)
  • Ja5087 - Thursday, May 23, 2013 - link

    "NVIDIA will be pricing the GTX 680 at $650, $350 below the GTX Titan and GTX 690, and around $200-$250 more than the GTX 680."

    I think you mean the 780?
  • Ja5087 - Thursday, May 23, 2013 - link

    Accidentally replied instead of commenting
  • Ryan Smith - Thursday, May 23, 2013 - link

    Thanks. Fixed.
  • nunomoreira10 - Thursday, May 23, 2013 - link

    compared to Titan it sure is a better value, but compared to the high end 2 years ago it's twice as much (Titan vs 580; 780 vs 570; 680 vs 560)
    NVIDIA is slowly getting people accustomed to high prices again,
    I'm gonna wait for AMD to see what she can bring to the table
  • Hrel - Friday, May 24, 2013 - link

    She? AMD is a she now?
  • SevenWhite7 - Monday, July 8, 2013 - link

    Yeah, 'cause AMD's more bang-for-the-buck.
    Basically, NVidia's 'he' 'cause it's always the most powerful, but also costs the most.
    AMD's 'she' 'cause it's always more efficient and reasonable.
    I'm a guy, and guys are usually more about power and girls are more about the overall package.
    Just my experience, anyway, and this is just me being dumb trying to explain it with analogies =P
  • sperkowsky - Wednesday, February 26, 2014 - link

    Bang for your buck has changed a bit. Just sold my 7950, added 80 bucks, and bought an EVGA ACX 780 B-stock.
  • cknobman - Thursday, May 23, 2013 - link

    At $650 I am just not seeing it. In fact I don't even see this card putting any pressure on AMD to do something.

    I'd rather save $200+ and get a 7970GE. If Nvidia really wants to be aggressive they need to sell this for ~$550.
  • chizow - Thursday, May 23, 2013 - link

    Nvidia has the GTX 770 next week to match up against the 7970GE in that price bracket; the 780 is clearly meant to continue the massive premiums for the GK110 flagship ASIC started by Titan. While it may not justify the difference in price relative to the 7970GHz, its performance, like Titan's, is clearly in a different class.
  • Finally - Thursday, May 23, 2013 - link

    The GTX770 is a GTX680 with a different BIOS.
  • Degong330 - Thursday, May 23, 2013 - link

    Amused to see blind fanboy comments
  • PaulRod - Friday, May 24, 2013 - link

    Well it is... slightly tweaked core, new bios, slightly improved performance... only worth buying if you're still on a 500/6000 series card or older.
  • YukaKun - Friday, May 24, 2013 - link

    Actually, it is true... At least, for the current PCB GTX680's

    Cheers!
  • Ninjawithagun - Monday, May 27, 2013 - link

    No, Finally is correct - the GTX 770 really is a GTX680 with a different BIOS! Unfortunately, there is no way to flash an existing GTX680 to a GTX770, in spite of early reports that such a capability existed. It turned out that the BIOS used to flash a GTX680 to a GTX770 was a fake: a modified GTX680 BIOS made to look as if it were a GTX770 BIOS. Confused yet? lol The bottom line is that the only difference between a GTX680 and GTX770 is the clock speeds. The GTX770 comes in at around 11-12% faster clock speeds and as such is about that much faster in frame rate rendering in games. So if you already own one or more GTX680s, it is definitely NOT worth upgrading to a GTX770!
  • An00bis - Friday, May 31, 2013 - link

    This reminds me of the 7870: differences of under 10%, about 5% clock for clock compared to a 7850 that can OC the same, and people still buy it even though it's $50 or more pricier than a 7850, just because it comes with a 1GHz clock, versus a 7850 that only comes at about 800MHz stock.
  • DanNeely - Thursday, May 23, 2013 - link

    The 770 is using a revised version of the chip. While we're unlikely to see a large improvement it should run slightly faster for the same TDP.
  • Hrel - Friday, May 24, 2013 - link

    500
  • Machelios - Thursday, May 23, 2013 - link

    Better value than Titan, but still very niche...
    I'd like to see what Nvidia and AMD can bring at $250 in their next gen cards 
  • AssBall - Thursday, May 23, 2013 - link

    Agreed. A good video card should cost about as much as a good CPU, or a good MB.
  • varad - Thursday, May 23, 2013 - link

    You do realize that a GPU like Titan has almost 5 times the number of transistors compared to Intel's biggest Core i7 CPU, right? There are 7.1 billion transistors in Titan vs 1.4 billion in Core i7 3770k. So, it means they cannot match the price of "a good CPU" unless they decide to become a non-profit organization :)
  • AssBall - Thursday, May 23, 2013 - link

    Well if all you needed was a single Titan to run your OS, computations, games, and nothing else, then no problem.
  • krutou - Sunday, May 26, 2013 - link

    Two problems with your logic

    22 nm fabrication is more expensive (price per transistor)

    CPUs are more difficult to design
  • An00bis - Friday, May 31, 2013 - link

    It's not like you can just shove your hand in a jar full of transistors, slap them on a chip, and call it a CPU. A CPU is required to do a GPU's task (integrated graphics) AND be good at everything a GPU can't do, which is... well, lots of things actually. A GPU is much simpler, which is why a CPU's design and manufacturing cost is probably higher than that of a big card that has to include memory, a PCB, and a GPU.
  • chizow - Thursday, May 23, 2013 - link

    Great card, but a year late. This is what GTX 600 series should've been but we all know how that went.

    I think Nvidia made some pretty big mistakes with how they handled the entire Kepler generation after Tahiti's launch price debacle. I know their financial statements and stockholders don't agree but they've managed to piss off their core consumers at every performance segment.

    Titan owners have to feel absolutely gutted at this point having paid $1000 for a part that is only ~10-15% faster than the GTX 780. End result of this generation is we are effectively paying 50-100% more for the same class of card than previous generations. While the 780 is a great card and a relatively good value compared to Titan, we're still paying $650 for what is effectively Kepler's version of the GTX 470.
  • Crisium - Thursday, May 23, 2013 - link

    People who bought a Titan knew what they were getting into. If you have regrets, you were in no position to buy a $1000 GPU to begin with and made a grievous financial error.

    $650 isn't horrible for this card, but you are still paying the Nvidia Tax.
  • chizow - Thursday, May 23, 2013 - link

    I don't think so, if you polled GTX Titan owners if they would've paid $1000 knowing 2-3 months later there would be a part that performed similarly at 35% less price, I think you would hear most of them would've waited to buy not 1, but 2 for just a bit more. Or instead of buying 2 Titans, buying 3x780s.

    Also, it really has nothing to do with being in a financial position or not, it's funny when Titan released I made the comment anyone interested in Titan would be better served by simply investing that money into Nvidia stock, letting that money grow on Titan's fat margins, and then buying 2x780s when they released. All according to plan, for my initial investment of 1 Titan I can buy 2x780s.

    But I won't. Nvidia blew it this generation, I'll wait for Maxwell.
  • IanCutress - Thursday, May 23, 2013 - link

    Titan was a compute card with optional gaming, rather than a gaming card with optional FP64 compute. That's why the price difference exists. If you bought a Titan card for Gaming, then you would/should have been smart enough to know that a similar card without compute was around the corner.
  • chizow - Thursday, May 23, 2013 - link

    Unfortunately, that was never how *GTX* Titan was marketed, straight from the horse's mouth:
    "With the DNA of the world’s fastest supercomputer and the soul of NVIDIA® Kepler™ architecture, GeForce® GTX TITAN GPU is a revolution in PC gaming performance."

    Not to mention the fact Titan is a horrible compute card and value outside of CUDA workloads, and even there it suffers as a serious compute card due to the lack of ECC. It's an overpriced gaming card, plain and simple.

    At the time, it was still uncertain whether or not Nvidia would launch more SKUs based on GK110 ASIC, but informed consumers knew Nvidia had to do something with all the chips that didn't make the TDP cut as Tesla parts.
  • mayankleoboy1 - Thursday, May 23, 2013 - link

    Really ? Apart from a few apps, Titan is poor compared to a 7970. It has bad OpenGL performance, which 90% of industry renderfarms use.
    Titan is really an overpriced gaming card.
  • lukarak - Friday, May 24, 2013 - link

    1/3rd FP32 and 1/24th FP32 is nowhere near 10-15% apart. Gaming is not everything.
  • chizow - Friday, May 24, 2013 - link

    Yes, fine, cut gaming performance on the 780 and Titan down to 1/24th and see how many of these you sell at $650 and $1000.
  • Hrel - Friday, May 24, 2013 - link

    THANK YOU!!!! WHY this kind of thing isn't IN the review is beyond me. As much good work as Nvidia is doing, their pricing schemes, naming schemes and general abuse of customers have turned me off of them forever. Which is convenient because AMD is really getting their shit together quickly.
  • chizow - Saturday, May 25, 2013 - link

    Ryan has danced around this topic in the past, he's a pretty straight shooter overall but it goes without saying why he isn't harping on this in his review. He has to protect his (and AT's) relationship with Nvidia to keep the gravy train flowing. They have gotten in trouble with Nvidia in the past (sometime around the "not covering PhysX enough" fiasco, along with HardOCP) and as a result, their review allocation suffered.

    In the end, while it may be the truth, no one with a vested interest in these products and their future success contributing to their livelihoods wants to hear about it, I guess. It's just us, the consumers that suffer for it, so I do feel it's important to voice my opinion on the matter.
  • Ryan Smith - Sunday, May 26, 2013 - link

    While you are welcome to your opinion and I doubt I'll be able to change it, I would note that I take a dim view towards such unfounded nonsense.

    We have a very clear stance with NVIDIA: we write what we believe. If we like a product we praise it, if we don't like a product we'll say so, and if we see an issue we'll bring it up. We are the press and our role is clear; we are not any company's friend or foe, but a 3rd party who stakes our name and reputation (and livelihood!) on providing unbiased and fair analysis of technologies and products. NVIDIA certainly doesn't get a say in any of this, and the only thing our relationship is built upon is their trusting our methods and conclusions. We certainly don't require NVIDIA's blessing to do what we do, and publishing the truth has and always will come first, vendor relationships be damned. So if I do or do not mention something in an article, it's not about "protecting the gravy train", but about what I, the reviewer, find important and worth mentioning.

    On a side note, I would note that in the 4 years I have had this post, we have never had an issue with review allocation (and I've said some pretty harsh things about NVIDIA products at times). So I'm not sure where you're hearing otherwise.
  • chizow - Monday, May 27, 2013 - link

    Hi Ryan, I respect your take on it and as I've said already, you generally comment on and understand the impact of pricing and economics more than most other reviewers, which is a big part of the reason I appreciate AT reviews over others.

    That being said, much of this type of commentary about pricing/economics can be viewed as editorializing, so while I'm not in any way saying companies influence your actual review results and conclusions, the choice NOT to speak about topics that may be considered out of bounds for a review does not fall under the scope of your reputation or independence as a reviewer.

    If we're being honest here, we're all human and business is conducted between humans with varying degrees of interpersonal relationships. While you may consider yourself truthful and forthcoming always, the tendency to bite your tongue when friendships are at stake is only natural and human. Certainly, a "How's your family?" greeting is much warmer than a "Hey what's with all that crap you wrote about our GTX Titan pricing?" when you meet up at the latest trade show or press event. Similarly, it should be no surprise when Anand refers to various moves/hires at these companies as good/close friends, that he is going to protect those friendships where and when he can.

    In any case, the bit I wrote about allocation was about the same time ExtremeTech got in trouble with Nvidia and felt they were blacklisted for not writing enough about PhysX. HardOCP got in similar trouble for blowing off entire portions of Nvidia's press stack and you similarly glossed over a bunch of the stuff Nvidia wanted you to cover. Subsequently, I do recall you did not have product on launch day and maybe later it was clarified there was some shipping mistake. Was a minor release, maybe one of the later Fermi parts. I may be mistaken, but pretty sure that was the case.
  • Razorbak86 - Monday, May 27, 2013 - link

    Sounds like you've got an axe to grind, and a tin-foil hat for armor. ;)
  • ambientblue - Thursday, August 8, 2013 - link

    Well, you failed to note how the GTX 780 is essentially Kepler's version of a GTX 570. It's priced twice as high though. The Titan should have been a GTX 680 last year... it's only a prosumer card because of the price LOL. That's like saying the GTX 480 is a prosumer card!!!
  • cityuser - Thursday, May 23, 2013 - link

    Whatever Nvidia does, it never improves their 2D quality. I mean, look at what nVidia will give you for Blu-ray playback: the color is still dead, dull, not really enjoyable.
    It's terrible to use nVidia for HD home cinema, whatever setting you try.
    Why can nVidia ignore this? Because it's spoiled.
  • Dribble - Thursday, May 23, 2013 - link

    What are you going on about?

    Bluray is digital, hdmi is digital - that means the signal is decoded and basically sent straight to the TV - there is no fiddling with colours, or sharpening or anything else required.
  • Stuka87 - Thursday, May 23, 2013 - link

    The video card does handle the decoding and rendering for the video. Anand has done several tests over the years comparing their video quality. There are definite differences between AMD/nVidia/Intel.
  • JDG1980 - Thursday, May 23, 2013 - link

    Yes, the signal is digital, but the drivers often have a bunch of post-processing options which can be applied to the video: deinterlacing, noise reduction, edge enhancement, etc.
    Both AMD and NVIDIA have some advantages over the other in this area. Either is a decent choice for a HTPC. Of course, no one in their right mind would use a card as power-hungry and expensive as a GTX 780 for a HTPC.

    In the case of interlaced content, either the PC or the display device *has* to apply post-processing or else it will look like crap. The rest of the stuff is, IMO, best left turned off unless you are working with really subpar source material.
  • Dribble - Thursday, May 23, 2013 - link

    To both of you above, on DVD yes, not on bluray - there is no interlacing, noise or edges to reduce - bluray decodes to a perfect 1080p picture which you send straight to the TV.

    All the video card has to do is decode it, which is why a $20 bluray player with a $5 cable will give you exactly the same picture and sound quality as a $1000 bluray player with a $300 cable - as long as the TV can take the 1080p input and the hifi can handle the HD audio signal.
  • JDG1980 - Thursday, May 23, 2013 - link

    You can do any kind of post-processing you want on a signal, whether it comes from DVD, Blu-Ray, or anything else. A Blu-Ray is less likely to get subjective quality improvements from noise reduction, edge enhancement, etc., but you can still apply these processes in the video driver if you want to.

    The video quality of Blu-Ray is very good, but not "perfect". Like all modern video formats, it uses lossy encoding. A maximum bit rate of 40 Mbps makes artifacts far less common than with DVDs, but they can still happen in a fast-motion scene - especially if the encoders were trying to fit a lot of content on a single layer disc.

    Most Blu-Ray content is progressive scan at film rates (1080p23.976) but interlaced 1080i is a legal Blu-Ray resolution. I believe some variants of the "Planet Earth" box set use it. So Blu-Ray playback devices still need to know how to deinterlace (assuming they're not going to delegate that task to the display).
  • Dribble - Thursday, May 23, 2013 - link

    I admit it's possible to post process but you wouldn't - a real time post process is highly unlikely to add anything good to the picture - fancy bluray players don't post process, they just pass on the signal. As for 1080i, that's a very unusual case for bluray, but as it's just the standard HD TV resolution, again pass it to the TV - it'll de-interlace it just like it does all the 1080i coming from your cable/satellite box.
  • Galidou - Sunday, May 26, 2013 - link

    ''All the video card has to do is decode it, which why a $20 bluray player with $5 cable will give you exactly the same picture and sound quality as a $1000 bluray player with $300 cable - as long as TV can take the 1080p input and hifi can handle the HD audio signal.''

    I'm an audiophile and a professional when it comes to hi-end home theater. I've built tons of HT systems around PCs and/or receivers, and I have to admit this is the funniest crap I've had to read. I'd just like to know how many Blu-ray players you've personally compared, up to let's say the OPPO BDP-105 (I've dealt with pricier units than this mere $1200 but still awesome Blu-ray player).

    While I can certainly say that image quality isn't affected much, the audio on the other side sees DRASTIC improvements. Hardware not having an effect on sound would be like saying there's no difference between a $200 and a $5000 integrated amplifier/receiver, pure nonsense.

    ''the same picture and sound quality''

    The part speaking about sound quality should really be removed from your comment, as it really astounds me to think you can believe what you said is true.
  • eddman - Thursday, May 23, 2013 - link

    http://i.imgur.com/d7oOj7d.jpg
  • EzioAs - Thursday, May 23, 2013 - link

    If I were a Titan owner (and had actually purchased the card, not gotten it in some free giveaway or something), I would regret that purchase very, very badly. $650 is still a very high price for the normal GTX x80 cards, but it makes the Titan basically a product with incredibly bad pricing (not that we don't know that already). Still, I'm no Titan owner, so what do I know...

    On the other hand, when I look at the graphs, I think the HD7970 is an even better card than ever despite it being 1.5 years older. However, as Ryan pointed out for previous GTX500 users who plan on sticking with Nvidia and are considering high end cards like this, it may not be a bad card at all since there are situations (most of the time) where the performance improvements are about twice the GTX580.
  • JeffFlanagan - Thursday, May 23, 2013 - link

    I think $350 is almost pocket-change to someone who will drop $1000 on a video card. $1K is way out of line with what high-quality consumer video cards go for in recent years, so you have to be someone who spends to say they spent, or someone mining one of the bitcoin alternatives in which case getting the card months earlier is a big benefit.
  • mlambert890 - Thursday, May 23, 2013 - link

    I have 3 Titans and don't regret them at all. While I wouldn't say $350 is "pocket change" (or in this case $1050 since it's x3), it also is a price I'm willing to pay for twice the VRAM and more perf. With performance at this level, "close" doesn't count, honestly, if you are looking for the *highest* performance possible. Gaming in 3D Surround, even 3x Titan actually *still* isn't fast enough, so no way I'd have been happy with 3x 780s for $1000 less.
  • ambientblue - Thursday, August 8, 2013 - link

    You are a sucker if you are willing to pay so much for twice the VRAM and 10% performance over the 780... if you got your Titans before the 780 was released then sure, it's a massive performance boost over 680s, but that's because the 680s should have been cheaper and named 660, and Titan should have cost what the 680 was going for. You won't be so satisfied when the GTX 880 comes out and obliterates your Titan at half the cost. Then again, with that kind of money you'll probably just buy 3 of those.
  • B3an - Thursday, May 23, 2013 - link

    I'd REALLY like to see more than just 3GB on high end cards. It's not acceptable. With the upcoming consoles having 8GB (with at least 5GB+ usable for games), even by the end of this year we may start seeing current high-end PC GPUs struggling due to lack of graphics RAM. These console games will have super high res textures, and when ported to PC, 3GB of graphics RAM will not cut it at high res. I also have 2560x1600 monitors, and there's no way X1/PS4 games are going to run at this res with just 3GB. Yet the whole point of a high-end card is for this type of res, as it's wasted on 1080p crap.

    Not enough graphics RAM was also a problem years ago on high-end GPUs. I remember having a 7950 GX2 with only 512MB (1GB total but 512MB for each GPU) and it would get completely crippled (single digit FPS) from running games at 2560x1600 or even 1080p. Once you hit the RAM limit things literally become a slideshow. I can see this happening again just a few months from now, but to pretty much EVERYONE who doesn't have a Titan with 6GB.

    So I'm basically warning people thinking of buying a high-end card at this point - you seriously need to keep in mind that just a few months from now it could be struggling due to lack of graphics RAM. Either way, don't expect your purchase to last long; the RAM issue will definitely be a problem in the not too distant future (give it 18 months max).
  • Vayra - Thursday, May 23, 2013 - link

    How can you be worried about the console developments, especially when it comes to VRAM of all things, when even the next-gen consoles are now looking to be no more than 'on-par' with today's PC performance in games? I mean, the PS4 is just a glorified midrange GPU in all respects and so is the X1, even though they treat things a bit differently, not using GDDR5. Even the 'awesome' Killzone and CoD Ghosts trailers show dozens of super-low-res textures and areas completely greyed out so as not to consume performance. All we get with the new consoles is that finally, 2011's 'current-gen' DX11 tech is coming to the console @ 1080p. But both machines will be running on that 8GB as their TOTAL RAM, and will be using it for all their tasks. Do you really think any game is going to eat up 5 gigs of VRAM at 1080p? Even Crysis 3 on PC does not do that on its highest settings (it just peaks at/over 3GB I believe?) at 1440p.

    Currently the only reason to own a gpu or system with over 2 GB of VRAM is because you play at ultra settings at a reso over 1080p. For 1080p, which is what 'this-gen' consoles are being built for (sadly...) 2 GB is still sufficient and 3 GB is providing headroom.

    Hey, and last but not least, Nvidia has to give us at least ONE reason to still buy those hopelessly priced Titans off them, right?

    Also, aftermarket versions of the 780 will of course be able to feature more VRAM as we have seen with previous generations on both camps. I'm 100% certain we will be seeing 4 GB versions soon.
  • B3an - Friday, May 24, 2013 - link

    The power of a console's GPU has nothing to do with it. Obviously these consoles will not match a high-end PC, but why would they have to in order to use more VRAM?! Nothing is stopping a mid-range or even a low-end PC GPU from using 4GB of VRAM if it wanted to. Same with consoles. And obviously they will not use all 8GB for games (as I pointed out) but we're probably looking at at least 4-5GB going towards games. The Xbox One for example is meant to use up to 3GB for the OS and other stuff; the remaining 5GB is totally available to games (or it's looking like that). Both the X1 and PS4 also have unified memory, meaning the GPU can use as much as it wants that isn't reserved for the OS.

    Crysis 3 is a bad example because this game is designed with ancient 8 year old console hardware in mind, so it's crippled from the start even if it looks better on PC. When we start seeing X1/PS4 ports to PC the VRAM usage will definitely jump up because textures WILL be higher res and other things WILL be more complex (level design, physics, enemy A.I. and so on). In fact the original Crysis actually has bigger open areas and better physics (explosions, mowing down trees) than Crysis 3 because it was totally designed for PC at the time. This stuff was removed in Crysis 3 because they had to make it play exactly the same across all platforms.

    I really think games will eat up 4+GB of VRAM within the next 18 months, especially at 2560x1600 and higher, and at least use over 3GB at 1080p. The consoles have been holding PCs back for a very very long time. Even console ports made for ancient console hardware with 512MB of VRAM can already use over 2GB on the PC version with enough AA + AF at 2560x1600. So that's just 1GB of VRAM left on a 3GB card, and 1GB is easily gone by just doubling texture resolution.
  • Akrovah - Thursday, May 23, 2013 - link

    You're forgetting that on these new consoles that 8GB is TOTAL system memory, not just the video RAM. While on a PC you have the 3GB of VRAM here plus the main system memory (probably around 8 gigs being pretty standard at this point).

    I can guarantee you the consoles are not using that entire amount, or even the 5+ GB available for games, as VRAM. And this part is just me talking out of my bum, but I doubt many games on these consoles will use more than 2GB of the unified memory for VRAM.

    Also I don't think the res has much to do with the video memory any more. Some quick math, and even if the game is triple buffering, a resolution of 2560x1600 only needs about 35 megs of storage. Unless my math is wrong:
    2560x1600 = 4,096,000 pixels at 24 bits each = 98,304,000 bits to store a single completed frame.
    Divide by 8 = 12,288,000 bytes; /1024 = 12,000 KiB; /1024 = 11.72 MiB per frame.

    Somehow I don't think modern graphics cards' video memory has anything to do with screen resolution; it's mostly used by the texture data.
  • inighthawki - Thursday, May 23, 2013 - link

    Most back buffer swap chains are created with 32-bit formats, and even if they are not, chances are the hardware would convert this internally to a 32-bit format for performance to account for texture swizzling and alignment costs. Even so, a 2560x1600x32bpp back buffer would be 16MB, so you're looking at 32 or 48MB for double and triple buffering, respectively.

    But you are right, the vast majority of video memory usage will come from high resolution textures. A typical HD texture is already larger than a back buffer (2048x2048 is slightly larger than 2560x1600) and depending on the game engine may have a number of mip levels also loaded, so you can increase the costs by about 33%. (I say all of this assuming we are not using any form of texture compression just for the sake of example).

    I also hope anyone who buys a video card with large amounts of ram is also running 64-bit Windows :), otherwise their games can't maximize the use of the card's video memory.
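    For anyone who wants to double-check the arithmetic in this sub-thread, here is a rough Python sketch of the buffer math (illustrative only; it ignores driver alignment, padding and texture compression, and the texture figures are just example values):

        def buffer_mib(width, height, bytes_per_pixel, count):
            # width x height x bytes-per-pixel x number of buffers, in MiB
            return width * height * bytes_per_pixel * count / (1024 * 1024)

        # 2560x1600 at 32bpp: ~15.6 MiB per buffer, ~31 / ~47 MiB double / triple buffered
        print(buffer_mib(2560, 1600, 4, 1), buffer_mib(2560, 1600, 4, 2), buffer_mib(2560, 1600, 4, 3))

        # A single uncompressed 2048x2048 RGBA texture is 16 MiB; a full mip chain
        # adds roughly a third, so ~21 MiB, already bigger than one back buffer.
        print(2048 * 2048 * 4 * (4 / 3) / (1024 * 1024))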
  • Akrovah - Friday, May 24, 2013 - link

    I was under the impression that on a 32-bit rendering pipeline the upper 8 bits were used for transparency calculation, but that it was then filtered down to 24 bits when actually written to the buffer, because that's how displays take information.

    But then I just made that up in my own mind because I don't actually know how or when the 32-bit render - 24-bit display conversion takes place.

    But assuming I was wrong and what you say is correct (a likely scenario in this case) my previous point still stands.
  • jonjonjonj - Thursday, May 23, 2013 - link

    I wouldn't be worried. The low-end CPU and APU in consoles won't be pushing anything. The consoles are already outdated and they haven't even been launched. The consoles have 8GB of TOTAL memory, not 8GB of VRAM.
  • B3an - Friday, May 24, 2013 - link

    Again, the power of these consoles has absolutely nothing to do with how much VRAM they can use. If a low-end PC GPU existed with 4GB VRAM, it can easily use all that 4GB if it wanted to.

    And it's unified memory in these consoles. It all acts as VRAM. ALL of the 8GB is available to the GPU and games that isn't used by the OS (which is apparently 3GB on the Xbox One for OS/other tasks, leaving 5GB to games).
  • Akrovah - Friday, May 24, 2013 - link

    No, it doesn't all act as VRAM. You still have your data storage objects like all your variables (of which a game can have thousands), AI objects, pathfinding data, all the coordinates for everything in the current level/map/whatever. Basically the entire state of the game that is operating behind the scenes. This is not insignificant.

    All the non-OS RAM is available to the games, yes, but games are storing a hell of a lot more data than what is typically stored in video RAM. Hence PC games that need 2GB of RAM may only require 512 megs of VRAM.
  • Akrovah - Friday, May 24, 2013 - link

    Oh yeah, forgot audio data, all of which gets stored in main RAM. And THAT will take up a pretty nice chunk of space right there.
  • Sivar - Thursday, May 23, 2013 - link

    You realize, of course, that the 8GB RAM in consoles is 8GB *TOTAL* RAM, whose capacity and bandwidth must be shared for video tasks, the OS, and shuffling the game's data files.

    A PC with a 3GB video card can use that 3GB exclusively for textures and other video card stuff.
  • B3an - Friday, May 24, 2013 - link

    See my comment above.
  • DanNeely - Thursday, May 23, 2013 - link

    Right now all we've got is the reference card being rebadged by a half dozenish companies. Give it a few weeks or a month and I'm certain someone will start selling a 6GB model. People gaming at 2560 or on 3 monitor setups might benefit from the wait; people who just want to crank AA at 1080p or even just be able to always play at max instead of fiddling with settings (and there're a lot more of them than there are of us) have no real reason to wait. Also, in 12 months Maxwell will be out and with the power of a die shrink behind it the 860 will probably be able to match what the 780 does anyway.
  • DanNeely - Tuesday, May 28, 2013 - link

    On HardOCP's forum I've read that nVidia's told its partners they shouldn't make a 6GB variant of the 780 (presumably to protect Titan sales). While it's possible one of them might do so anyway, getting nVidia mad at them isn't a good business strategy, so it's doubtful any will.
  • tipoo - Thursday, May 23, 2013 - link

    If a slightly cut down Titan is their solution for the higher end 700 series card, I wonder what else the series will be like? Will everything just plop down a price category, the 680 in the 670s price point, etc? That would be uninteresting, but reasonable I guess, given how much power Kepler has on tap. And it wouldn't do much for mobile.
  • DigitalFreak - Thursday, May 23, 2013 - link

    The 770 will be identical to the 680, but with a slightly faster clock speed. I believe the same will be true with the 760 / 670. Those cards are probably still under NDA, which is why they weren't mentioned.
  • chizow - Thursday, May 23, 2013 - link

    Yep 770 at least is supposed to launch a week from today, 5/30. Satisfy demand from the top-down and grab a few impulse buyers who can't wait along the way.
  • yannigr - Thursday, May 23, 2013 - link

    No free games. With an AMD card you also get many AAA games. So Nvidia is a little more expensive than just +$200 compared with the 7970GE.
    I am expecting reviewers someday to stop ignoring game bundles just because they come from AMD. We are not talking about one or two games here, or old games, or demos. We are talking about MONEY. 6-7-8-9-10 free AAA titles are MONEY.
  • Tuvok86 - Thursday, May 23, 2013 - link

    I believe nVidia has bundles as well
  • Stuka87 - Thursday, May 23, 2013 - link

    Not for the 780 they don't. There are some cards that have Duke Nukem Forever (lol) and some that have Metro: Last Light. But nothing like what AMD offers.
  • chizow - Thursday, May 23, 2013 - link

    Metro was not listed when I checked today. EVGA in the past has done their customers right by sending them codes, but not always; it depends how tightly Nvidia controls the promotion.
  • jonjonjonj - Thursday, May 23, 2013 - link

    I want free games. Wah wah wah. Screw free games. I would rather have a cheaper, more competitively priced card than get some game I don't even want.
  • Homeles - Thursday, May 23, 2013 - link

    I really like the "delta percentages" method of comparing the smoothness between each card.
  • hero1 - Thursday, May 23, 2013 - link

    This is one awesome card. I really don't think that AMD can compete with Nvidia atm but I would like to see what they have to offer. I want this card but I will be buying the custom cooled ones due to better cooling. Thanks for an awesome review, I have been waiting patiently, not!
  • formulav8 - Friday, May 24, 2013 - link

    I haven't seen anything even close to dire for AMD that they should be all that worried about. Unless I missed something?
  • mayankleoboy1 - Thursday, May 23, 2013 - link

    $649 ?
    Move along, nothing to do here.
  • EzioAs - Thursday, May 23, 2013 - link

    That was my original plan, but I ended up reading the article. Ahh, curiosity...
  • Spoelie - Thursday, May 23, 2013 - link

    The paragraph on SMXs and GPCs is confusing without the information available from the original Titan piece. One has to remember or infer that Titan already had an SMX disabled, and that GK110 in fact has 15 SMXs built in. This conclusion is also non-obvious because the text before discussed that there were no disabled functional units in *most* product tiers.

    Otherwise the reduction from 14 SMXs to 12 SMXs would imply 2 disabled clusters, not 3.
  • chaosbloodterfly - Thursday, May 23, 2013 - link

    Waiting to see what AMD has for the 8970. Hopefully they don't do what nVidia did for the 680 and release something with barely better performance with almost zero pricing pressure 6 months later.

    I want something worth liquid cooling damn it!
  • aidivn - Thursday, May 23, 2013 - link

    So, how many double precision units are there in each SMX unit of the GTX 780? Titan had 64 DP units in each of its SMX units, which totaled 896 DP units.

    And can you turn them on or off from the ForceWare driver menu, like "CUDA – Double Precision" for the GTX 780?
  • Ryan Smith - Thursday, May 23, 2013 - link

    Hardware wise this is GK110, so the 64 DP units are there. But most of them would be disabled to get the 1/24 FP64 rate.
  • aidivn - Friday, May 24, 2013 - link

    so how many are disabled and how many are enabled (numbers please)?
  • Ryan Smith - Friday, May 24, 2013 - link

    You would have only 1/8th enabled. So 8 per SMX are enabled, while the other 56 are disabled.
  • aidivn - Saturday, May 25, 2013 - link

    So, the GTX 780 only has 96 DP units enabled while the GTX TITAN has 896 DP units enabled... that's a huge cut in double precision.
  • DanNeely - Sunday, May 26, 2013 - link

    That surprised me too. Previously the cards based on the G*100/110 chips were 1/8; this is a major hit vs the 580/480/280 series cards.
  • Old_Fogie_Late_Bloomer - Thursday, May 23, 2013 - link

    "GTX 780 on the other hand is a pure gaming/consumer part like the rest of the GeForce lineup, meaning NVIDIA has stripped it of Titan’s marquee compute feature: uncapped double precision (FP64) performance. As a result GTX 780 can offer 90% of GTX Titan’s gaming performance, but it can only offer a fraction of GTX Titan’s FP64 compute performance, topping out at 1/24th FP32 performance rather than 1/3rd like Titan."

    Seriously, this is just...it's asinine. Utterly asinine.
  • tipoo - Thursday, May 23, 2013 - link

    Market segmentation is nothing new. The Titan really is a steal if you need DP, the next card up is 2400 dollars.
  • Old_Fogie_Late_Bloomer - Thursday, May 23, 2013 - link

    I'm well aware of the existence of market segmentation, but this is just ridiculous. Putting ECC RAM on professional cards is segmentation. Disabling otherwise functional features of hardware, most likely in the software drivers...that's just...ugh.
  • SymphonyX7 - Thursday, May 23, 2013 - link

    I just noticed that the Radeon HD 7970 Ghz Edition has been trouncing the GTX 680 in most of the benchmarks and trailing the GTX 680 in those benchmarks that traditionally favored Kepler. What the heck just happened? Didn't the review of the Radeon HD 7970 Ghz Edition say that it was basically tied with the GTX 680?
  • SymphonyX7 - Thursday, May 23, 2013 - link

    *mildly/narrowly trailing the GTX 680
  • chizow - Thursday, May 23, 2013 - link

    AMD released some significant driver updates in ~Oct 2012, branded "Never Settle" drivers that did boost GCN performance significantly, ~10-20% in some cases where they were clearly deficient relative to Nvidia parts. It was enough to make up the difference in a lot of cases or extend the lead to where the GE is generally faster than the 680.

    On the flipside, some of AMD's performance claims, particularly with CF, have come under fire due to concerns about microstutter and frame latency, i.e. the ongoing runt frame saga.
  • Vayra - Thursday, May 23, 2013 - link

    Drivers possibly?
  • kallogan - Thursday, May 23, 2013 - link

    High end overpriced gpu again ! Next !
  • wumpus - Thursday, May 23, 2013 - link

    Except that the 780 is nothing more than a Titan with even more cuda performance disabled. Presumably, they are expecting to get Titan sales to people interested in GPU computing, if only for geeklust/boasting.
  • wumpus - Thursday, May 23, 2013 - link

    My above comment was supposed to be a reply. Ignore/delete if possible.
  • ifrit39 - Thursday, May 23, 2013 - link

    Shadow Play is the most interesting news here. It costs a not-insignificant amount of money to buy a decent capture card that will record HD video. This is a great alternative as it requires no extra hardware and has little CPU/GPU overhead. Anything that ends up on the net will be compressed by youtube or other service anyway. I can't wait to remove fraps and install shadow play.
  • ahamling27 - Saturday, May 25, 2013 - link

    Fraps isn't the best, but they somehow have the market cornered. Look up Bandicam; I use it exclusively and I get great captures at a fraction of the size. Plus they aren't cut up into 4 gig files. It has at least 15x more customization, like putting watermarks in your capture, or if you do like to segment your files you can have it do that at any size or time length you want. Plus you can record two sound sources at once, like your game and mic, or your game and whatever voice chat software you use.

    Anyway, I probably sound like I work for them now, but I can assure you I don't. This Shadow Play feature is definitely piquing my interest. If it's implemented wisely, it might just shut all the other software solutions down.
  • garadante - Thursday, May 23, 2013 - link

    There were two things that instantly made me dislike this card, much as I've liked Nvidia in the past: completely disabling the compute performance down to 600 series levels which was the reason I was more forgiving towards AMD in the 600/7000 series generation, and that they've priced this card at $650. If I remember correctly, the 680 was priced at $500-550 at launch, and that itself was hard to stomach as it was and still is widely believed GK104 was meant to be their mid-range chip. This 780 is more like what I imagined the 680 having been and if it launched at that price point, I'd be more forgiving.

    As it is... I'm very much rooting for AMD. I hope with these new hires, of which Anandtech even has an article of their new dream team or some such, that AMD can become competitive. Hopefully the experience developers get with their kind-of-funky architecture with the new consoles, however underwhelming they are, brings software on the PC both better multithreaded programming and performance, and better programming and performance to take advantage of AMD's module scheme. Intel and Nvidia both need some competition so we can get this computer hardware industry a bit less stagnated and better for the consumer.
  • EJS1980 - Tuesday, May 28, 2013 - link

    The 680 was $500 at launch, and was the main reason why AMD received so much flak for their 7970 pricing. At the time it launched, the 680 blew the 7970 away in terms of gaming performance, which was the reason AMD had to respond with across-the-board price drops on the 7950/70, even though it took them a few months.
  • just4U - Thursday, May 23, 2013 - link

    I love the fact that they're using the cooler they used for the Titan. While I plan to wait (no need to upgrade right now) I'd like to see more of that. It's a feature I'd pay for from both Nvidia and AMD.
  • HalloweenJack - Thursday, May 23, 2013 - link

    No compute with the GTX 780 - the DP is similar to a GTX 480 and way, way down on a 7970. No Folding on these then.
  • BiffaZ - Friday, May 24, 2013 - link

    Folding doesn't use DP currently, it's SP; same for most @home type compute apps, the main exception being Milkyway@Home, which needs DP a lot.
  • boe - Thursday, May 23, 2013 - link

    Bring on the DirectCU version and I'll order 2 today!
  • slickr - Thursday, May 23, 2013 - link

    At $650 it's way too expensive. Two years ago this card would have been $500 at launch, and within 4-5 months it would have been $400, with the slower cut-down version at $300 and mid-range cards at $200.

    I hope people aren't stupid enough to buy this overpriced card that only brings about 5fps more than AMD's top end single card.
  • chizow - Thursday, May 23, 2013 - link

    I think if it had launched last year, its price would have been more justified, but Nvidia sat on it for a year while they propped up the mid-range GK104 as flagship. Very disappointing.

    Measured on its own merits, the GTX 780 is very impressive and probably worth the increase over previous flagship price points. For example, it's generally 80% faster than the GTX 580 and almost 100% faster than the GTX 480, its predecessors. In the past the increase might only be ~60-75%, improving somewhat with driver gains. It also adds some bling and improvements with the cooler.

    It's just too late, imo, for Nvidia to ask those kinds of prices, especially after lying to their fanbase about GK104 always having been slotted as the Kepler flagship.
  • JPForums - Thursday, May 23, 2013 - link

    I love what you are doing with frame time deltas. Some sites don't quite seem to understand that you can maintain low maximum frame times while still introducing stutter (especially in the simulation time counter) by having large deltas between frames. In the worst case, your simulation time can slow down (or speed up) while your frame time moves back in the opposite direction exaggerating the result.

    Admittedly I may be misunderstanding your method as I'm much more accustomed to seeing algebraic equations describing the method, but assuming I get it, I'd like to suggest a further modification to your method to deal with performance swings that occur expectedly (transition to/from cut-scenes, arrival/departure of graphically intense elements, etc.). Rather than compare the average of the delta between frames against an average frame time across the entire run, you could compare instantaneous frame time against a sliding window average. The window could be large for games with consistent performance and smaller for games with mood swings.

    Using percentages when comparing against the average frame time for the entire run can result in situations where two graphics solutions with the exact same deltas would show the one with better performance having worse deltas. As an example, take any video card's frame time graph, subtract 5ms from each frame time, and compare the two resulting delta percentages. A sliding window accounts for natural performance deviations while still giving a baseline to compare frame time swings against. If you are dead set on percentages, you can take them from there, as the delta percentages from local frame time averages are more relevant than the delta percentage from the run's overall average. Given my love of number manipulation, though, I'd still prefer to see the absolute frame time difference from the sliding window average. It would make it much easier for me to see whether the difference to the windowed average is large (let's say >15ms) or small (say <4ms).

    Of course, while I'm being demanding, it would be nice to get an xls, csv, or some other format of file with the absolute frame times so I can run whatever graph I want to see myself. I won't hold my breath. Take some of my suggestions, all of them, or none of them. I'm just happy to see where things are going.
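    A minimal Python sketch of the sliding-window idea suggested above (the window size and the 15 ms threshold are arbitrary illustration values, not anything AnandTech actually uses):

        def windowed_deviations(frame_times_ms, window=21):
            # Deviation of each frame time from the average of a centered sliding window.
            half = window // 2
            out = []
            for i, ft in enumerate(frame_times_ms):
                lo = max(0, i - half)
                hi = min(len(frame_times_ms), i + half + 1)
                local_avg = sum(frame_times_ms[lo:hi]) / (hi - lo)
                out.append(abs(ft - local_avg))
            return out

        # Flag frames that sit more than 15 ms away from their local average
        times = [16.7, 16.9, 33.4, 16.5, 16.8, 17.0, 50.2, 16.6]
        print([i for i, d in enumerate(windowed_deviations(times, window=5)) if d > 15.0])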
  • Arnulf - Thursday, May 23, 2013 - link

    The correct metric for this comparison would be die size (area) and complexity of manufacturing rather than the number of transistors.

    RAM modules contain far more transistors (at least a couple of transistors per bit, with common 4 GB = 32 Gb = 64+ billion transistors per stick modules selling for less than $30 on Newegg), yet cost peanuts compared to this overpriced abomination that is 780.
  • marc1000 - Thursday, May 23, 2013 - link

    and GTX 760 ??? what will it be? will it be $200??

    or maybe the 660 will be rebranded as 750 and go to $150??
  • kilkennycat - Thursday, May 23, 2013 - link

    FYI: eVGA offers "Superclocked" versions of the GTX 780 with either an eVGA-designed "ACX" dual-open-fan cooler, or the nVidia-designed "Titan" blower. Both, at $659, are ~$10 more than the default-speed version. The overclocks are quite substantial: 941MHz base, 993MHz boost (vs default 863/902) for the "Titan" blower version, 967/1020 for the ACX-cooler version. The ACX cooler is likely to be more noisy than the "Titan", plus it will dump some exhaust heat back into the computer case. Both of these eVGA Superclocked types were available for a short time on Newegg this morning, now "Auto Notify" :-( :-(
  • littlebitstrouds - Thursday, May 23, 2013 - link

    Being a system builder for video editors, I'd love to get some video rendering performance numbers.
  • TheRealArdrid - Thursday, May 23, 2013 - link

    The performance numbers on Far Cry 3 really show just how poorly Crysis was coded. There's no reason why new top-end hardware should still struggle on a 6 year old game.
  • zella05 - Thursday, May 23, 2013 - link

    Just no. Crysis looks way better than Far Cry 3. Don't forget, Crysis is a PC game; Far Cry is a console port.
  • Ryan Smith - Thursday, May 23, 2013 - link

    On a side note, I like Far Cry 3, but I'd caution against using it as a baseline for a well-behaved game. It's an unusually fussy game. We have to disable HT to make it behave, and the frame pacing even on single GPU cards is more variable than what we see in most other games.
  • zella05 - Thursday, May 23, 2013 - link

    There has to be something wrong with your testing. How on earth can 2560x1440 only shave 1fps off all those cards? Impossible. I have dual 580s on a Dell 1440p monitor and I can say with complete conviction that when playing Crysis 3 you lose at LEAST 10% frame rate. Explain yourselves?
  • WeaselITB - Thursday, May 23, 2013 - link

    There are two 1080p graphs -- one "High Quality" and one "Very High Quality" ... the 1440p graph is "High Quality."
    Comparing HQ between the two gives 79.4 to 53.1 for the 780 ... seems about right to me.

    -Weasel
  • BrightCandle - Thursday, May 23, 2013 - link

    Both of your measures taken from FCAT have issues which I will try to explain below.

    1) The issue with the 95% point

    If we take a game where 5% of the frames are being produced very inconsistently then the 95% point won't capture the issue. But worse is the fact that a 1 in 100 frame that takes twice as long is very noticeable to everyone when playing. Just 1% of the frames having an issue is enough to see a noticeable problem. Our eyes don't work by taking 95% of the frames; our eyes require a level of consistency on all frames. Thus the 95% point is not the equivalent of minimum FPS - that would be the 100% point. The 95% point is arbitrary and ultimately not based on how we perceive the smoothness of frames. It captures AMD's current Crossfire issue but it fails to have the resolution necessary as a metric to capture the general problem and compare single cards.

    2) The issue with the delta averaging

    By comparing to the average frame time this method would incorrectly categorise clearly better performing cards. It's the same mistake Tom's Hardware made. In essence, if you have a game that is sometimes CPU limited (common) and then GPU limited, the two graphics cards will show similar frame rates at some moments and the faster of them will show dramatically higher performance at other times. This makes the swing from the minimum/average to the high fps much wider. But it could be a perfectly consistent experience in the sense that, frame to frame, the variation is minimal for the most part. Your calculation would tell us the variation of the faster card was a problem, when actually it wasn't.

    The reason that measure isn't right is that it fails to recognise the thing we humans see as a problem. We have issues with individual frames that take a long time. We also have issues with inconsistent delivery of animation in patterns. If we take 45 fps for example, the 16/32/16/32 pattern it can produce with vsync is highly noticeable. The issue is that frame to frame we are seeing variation. This is why all the other review sites show the frame times, because the stuttering on a frame by frame basis really matters.

    We don't particularly have issues with a single momentary jump up or down in frame rate, we might notice them but its momentary and then we adapt rapidly. What our brains do not adapt to rapidly is continuous patterns of odd delivery of frames. Thus any measure where you try to reduce the amount of data needs to be based on that moment by moment variation between individual or small numbers of frames, because big jumps up and down in fps that last for 10s of seconds are not a problem, the issue is the 10ms swing between two individual frames that keeps happening. You could look for patterns, you could use signal frequency analysis and various other techniques to tune out the "carrier" signal of the underlying FPS. But what you can't do is compare it to the average, that just blurs the entire picture. A game that started at 30 fps for half the trace and then was 60 fps for half the trace with no other variation is vastly better than one that continuously oscillates between 30 and 60 fps every other frame.

    It's also important to understand that your analysis is missing Fraps. Fraps isn't necessarily good for measuring what the cards are doing, but it is essentially the best current way to measure what the game engine is doing. The GPU impacts the game simulation and its timing, and variation in this affects what goes into the frames. So while FCAT captures whether the frames come out smoothly, it does not tell us anything about whether the content is at the right time; Fraps is what does that. NVidia is downplaying that tool because they have FCAT and are trying to show off their frame metering, and AMD is downplaying it because their cards have issues, but it is still a crucial measure. The ideal picture is both that the Fraps times are consistent and the FCAT measures are consistent; they after all measure the input into the GPU and the output, and we need both to get a true picture of the sub-component.

    Thus I am of the opinion your data doesn't currently show what you thought it did and your analysis needs work.
  • rscsrAT - Thursday, May 23, 2013 - link

    As far as I understood the delta averaging, it adds the time difference between two adjacent frames.
    To make it clear, if you have 6 frames with 16/32/16/32/16/32ms per frame, you would calculate the value with (5*16)/((3*16+3*32)/6)=333%.
    But if you have 6 frames with 16/16/16/32/32/32ms per frame, you would have 16/((3*16+3*32)/6)=67%.
    Therefore you still have a higher value for a higher fluctuating framerate than with a steady framerate.
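    A small Python sketch of the metric as rscsrAT reads it (the sum of absolute frame-to-frame deltas divided by the average frame time); this is just an illustration of that reading, not necessarily AnandTech's exact formula:

        def delta_percentage(frame_times_ms):
            # Sum of absolute differences between adjacent frames,
            # expressed as a percentage of the average frame time.
            deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
            avg_frame_time = sum(frame_times_ms) / len(frame_times_ms)
            return 100.0 * sum(deltas) / avg_frame_time

        print(delta_percentage([16, 32, 16, 32, 16, 32]))  # ~333%
        print(delta_percentage([16, 16, 16, 32, 32, 32]))  # ~67%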
  • WeaselITB - Thursday, May 23, 2013 - link

    For your #1 -- 95th percentile is a pretty common statistical analysis tool http://en.wikipedia.org/wiki/68-95-99.7_rule ... I'm assuming that they're assuming a normal distribution, which intuitively makes sense given that you'd expect most results to be close to the mean. I'd be interested in seeing the 3-sigma values, as that would further point out the extreme outliers, and would probably satisfy your desire for the "1%" as well.

    For your #2 -- they're measuring what you're describing, the differences between individual frametimes. Compare their graphs on the "Our First FCAT" page between the line graph of the frametimes of the cards and the bar graph after they massaged the data. The 7970GE has the smallest delta percentage, and the tightest line graph. The 7990 has the largest delta percentage (by far), and the line graph is all over the place. Their methodology of coming up with the "delta percentage" difference is sound.

    -Weasel
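    For reference, a short sketch of how a 95th-percentile (or near-3-sigma) frame time can be pulled from a frame time log; the sample numbers are made up, and the 99.7th percentile only corresponds to "3 sigma" under the normal-distribution assumption mentioned above:

        import numpy as np

        frame_times_ms = np.array([16.7, 16.9, 33.4, 16.5, 16.8, 17.0, 50.2, 16.6])
        print(np.percentile(frame_times_ms, 95))    # 95th-percentile frame time
        print(np.percentile(frame_times_ms, 99.7))  # roughly the 3-sigma tail point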
  • jonjonjonj - Thursday, May 23, 2013 - link

    AMD, get your act together so we have some competition. I really don't even see the point of this card at this price. What are they going to do for the 770? Sell an even more crippled GK110 for $550? And the 760 Ti will be $450? Or are they just going to sell the 680 as a 770?
  • formulav8 - Friday, May 24, 2013 - link

    AMD is competing just fine, unless I overlooked something? It seems this card and price wouldn't have AMD worried.
  • i_max2k2 - Thursday, May 23, 2013 - link

    This is a big request, but could we please, please add GTX 580 SLI results in there too. I have that setup and I feel it's more than enough for something like a Full HD to 2560x1600 resolution monitor, for most recent games.
  • gonks - Thursday, May 23, 2013 - link

    What about bitcoin mining? Is it the same as the GTX TITAN?
  • Daeros - Thursday, May 23, 2013 - link

    "...of a casted aluminum..."

    Seriously?! Casted? That made it through proof and spell-check and anyone over the age of six?
  • Ryan Smith - Thursday, May 23, 2013 - link

    Noted and fixed. Thank you. And I'd note at 5am in the morning, a six year old is about where my mental capacity is at...
  • Daeros - Thursday, May 23, 2013 - link

    and I may have been a bit harsh, apologies
  • just4U - Friday, May 24, 2013 - link

    I understood what he meant... and sadly I am not sure why it's a spelling mistake.. (without checking haha..) I'm 44! But I do lack grammar skills.
  • Ryan Smith - Friday, May 24, 2013 - link

    Nah, there's no need to apologize. You were spot-on.
  • Nfarce - Thursday, May 23, 2013 - link

    Okay so when are 680 prices going to drop so I can get a second one for my new 2560x1440 monitor?
  • Airjarhead - Thursday, May 23, 2013 - link

    When is the Gigabyte Windforce coming out? I can't find it on Newegg, TD, or Amazon.
    BTW, I was at work when the 780 hit the stores, so I probably missed it, but did Amazon have any 780's in stock? By the time I looked they said they were out of stock of every brand. Newegg still has reference boards available, but I want the Windforce or ACX cooler.
  • mac2j - Thursday, May 23, 2013 - link

    The problem with $650 vs $500 for this price point is this:

    I can get 2 x 7950s for <$600 - that's a setup that destroys a 780 for less money.

    Even if you're single-GPU limited, $250 is a lot of extra cash for a relatively small performance gain.
  • Ytterbium - Thursday, May 23, 2013 - link

    I'm disappointed they decided to cut double-precision compute to 1/24 vs 1/3 on Titan; AMD is much better value for compute tasks.
  • BiffaZ - Friday, May 24, 2013 - link

    Except most consumer (@home type) compute is SP, not DP, so it won't make much difference. The 780's SP performance is around equal to or higher than AMD's.
  • Nighyal - Thursday, May 23, 2013 - link

    I don't know if this is possible, but it would be great to see a benchmark that showed power, noise and temperature at a standardized workload. We can infer an idea of performance per watt, but when you're measuring a whole system other factors come into play (you mentioned CPU load scaling with increased GPU performance).

    My interest in this comes from living in a hot climate (Australia), where a computer can throw out a very noticeable amount of heat. The large majority of my usage is light gaming (LoL), but I occasionally play quite demanding single-player titles that stretch the legs of my GPU. The amount of heat thrown out is directly proportional to power draw, so being able to clearly see how many fewer watts a system requires for a controlled workload would be a handy comparison for me.

    TL;DR - Please also measure temperature, noise and power at a controlled workload to isolate performance per watt.
  • BiggieShady - Friday, May 24, 2013 - link

    Kudos on the FCAT and the delta percentage metrics. So 32.2% for the 7990 means that on average one frame is present 32.2% longer than the next. Still, it is only an average. Great extra info would be to show the same metric averaging only the deltas above a threshold, and to display it on a graph with varying thresholds.
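
    For reference, here is a minimal Python sketch of how a frame-to-frame delta percentage of this sort could be computed, including the thresholded variant suggested above (hypothetical frame times and threshold, not AnandTech's actual FCAT pipeline):

```python
# Hypothetical frame times in milliseconds (illustrative only)
frame_times = [16.0, 16.0, 16.0, 32.0, 32.0, 32.0, 16.0, 48.0, 16.0]

avg_frame_time = sum(frame_times) / len(frame_times)

# Frame-to-frame deltas: how much each frame's time differs from the next one
deltas = [abs(b - a) for a, b in zip(frame_times, frame_times[1:])]

# Delta percentage: the average delta relative to the average frame time
delta_pct = (sum(deltas) / len(deltas)) / avg_frame_time * 100

# Thresholded variant: only average the deltas above some cutoff (here 8 ms)
threshold = 8.0
big_deltas = [d for d in deltas if d > threshold]
big_pct = (sum(big_deltas) / len(big_deltas)) / avg_frame_time * 100 if big_deltas else 0.0

print(f"delta percentage: {delta_pct:.1f}%")
print(f"delta percentage (deltas > {threshold} ms only): {big_pct:.1f}%")
```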
  • flexy - Friday, May 24, 2013 - link

    NV releases a card with a ridiculous price point of $1000. Then they castrate the exact same card, give it a new name to make it look like a "new card", and sell it cheaper than their way overpriced high-end card. Which, of course, is a "big deal" (sarcasm) given the crazy price of Titan. Either way, I don't like what NV is doing, in the slightest.

    Many ages ago, people could buy *real* top-of-the-line cards which always cost about $400-$500; today you pay $600+ for "trash cards" built from chips that didn't make the cut for Titan production. Nvidia: "Hey, let's just make up a new card and sell those chips too, lols"

    Please AMD, help us!!
  • bds71 - Friday, May 24, 2013 - link

    For what it's worth, I would have liked to see the 780 *truly* fill the gap between the 680 and Titan by offering not only the gaming performance but ALSO the compute performance - if they had gone with a 1/6 or even 1/12 FP64 rate!! To better fill the gap and round out the performance all around, I would HAPPILY pay $650 for this card. As it is, I already have a 690, so I will simply get another for 4K gaming - but a comparison between 3x 780s and 2x 690s (both very close to $2k) at 8+ Mpixel resolutions would be extremely interesting. Note: 3x 30" monitors could easily be configured for 4800x2560 via NVIDIA Surround or Eyefinity - and I, for one, would love to see THAT review!!
  • flexy - Friday, May 24, 2013 - link

    Well, compute performance is the other thing, along with their questionable GPU throttling aka "boost" (yeah right) technology. Paying a premium for such a card and getting weak compute performance in exchange, compared to older-gen cards or the AMD offerings... Seriously, there is a lot not to like about Kepler, at least from an enthusiast point of view. I hope NV doesn't continue down this route, with their cards becoming less attractive while prices go up.
  • EJS1980 - Wednesday, May 29, 2013 - link

    Cynical much?
  • ChefJeff789 - Friday, May 24, 2013 - link

    Glad to see the significant upgrade. I just hope that AMD forces the prices back down again soon. I hope the AMD release "at the end of the year" is closer to September than December. It'll be interesting to see how they stack up. BTW, I have shied away from AMD cards ever since I owned an X800 and had SERIOUS issues with the catalyst drivers (constant blue-screens, had to do a Windows clean-install to even get the card working for longer than a few minutes). I know this was a long time ago, and I've heard from numerous people that they're better now. Is this true?
  • milkman001 - Friday, May 24, 2013 - link

    FYI,

    On the "Total War: Shogun 2" page, you have the 2650x1440 graph posted twice.
  • JDG1980 - Saturday, May 25, 2013 - link

    I don't think that the release of this card itself is problematic for Titan owners - everyone knows that GPU vendors start at the top and work their way down with new silicon, so this shouldn't have come as much of a surprise.

    What I do find problematic is their refusal to push out BIOS-based fan controller improvements to Titan owners. *That* comes off as a slap in the face. Someone who spends $1000 on a new video card deserves top-notch service and updates.
  • inighthawki - Saturday, May 25, 2013 - link

    The typical swapchain format is something like R8G8B8A8, and the alpha channel is usually ignored (a value of 0xFF is written), since it's of no use to the OS - it won't be alpha-blended with the rest of the GUI. You can create a 24-bit format, but it's very likely that, for performance reasons, the driver will allocate it as if it were a 32-bit format and simply not write to the upper 8 bits. The hardware is often only capable of writing to 32-bit-aligned locations, so it's more beneficial for the hardware to waste 8 bits per pixel than to do any fancy shifting to read or write each pixel. I've actually seen cases where some drivers allocate 8-bit formats as 32-bit formats, wasting 4x the space the user thought they were allocating.
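
    As a back-of-the-envelope illustration of that padding (hypothetical numbers, not any specific driver's allocator), here is how a 1920x1080 surface requested as 24-bit RGB compares with the 32-bit-aligned allocation a driver is likely to make:

```python
# Hypothetical 1920x1080 surface, requested as 24-bit RGB (no alpha)
width, height = 1920, 1080
requested_bpp = 24   # what the application asked for
allocated_bpp = 32   # what a driver will likely allocate for 32-bit alignment

requested_bytes = width * height * requested_bpp // 8
allocated_bytes = width * height * allocated_bpp // 8

print(f"requested: {requested_bytes / 1024**2:.1f} MiB")
print(f"allocated: {allocated_bytes / 1024**2:.1f} MiB "
      f"({(allocated_bytes - requested_bytes) / 1024**2:.1f} MiB of padding)")
```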
  • jeremyshaw - Saturday, May 25, 2013 - link

    As a current GTX 580 owner running at 2560x1440, I don't feel any need to upgrade, especially not for compute performance. I think I'll hold out for at least one more generation before deciding.
  • ahamling27 - Saturday, May 25, 2013 - link

    As a GTX 560 Ti owner, I am chomping at the bit to pick up an upgrade. The Titan was out of the question, but the 780 looks a lot better at 65% of the cost for 90% of the performance. The only thing holding me back is that I'm still on Z67 with a 2600K overclocked to 4.5GHz. I don't see a need to rebuild my entire system, as it's almost on par with the Z77/3770. The problem is that I'm still on PCIe 2.0, and I'm worried that it would bottleneck a 780.

    Considering a 780 is aimed at us with 5xx or lower cards, it doesn't make sense if we have to abandon our platform just to upgrade our graphics card. So could you maybe test a 780 on PCIe 2.0 vs 3.0 and let us know if it's going to bottleneck on 2.0?
  • Ogdin - Sunday, May 26, 2013 - link

    There will be no bottleneck.
  • mapesdhs - Sunday, May 26, 2013 - link


    Ogdin is right, it shouldn't be a bottleneck. And with a decent air cooler, you ought to be able to get your 2600K to 5.0GHz, so you have some headroom there as well.

    Lastly, you say you have a GTX 560 Ti. Are you aware that adding a 2nd card will give performance akin to a GTX 670? And two 560 Tis oc'd are better than a stock 680 (VRAM capacity notwithstanding, i.e. I'm assuming you have a 1GB card). Here's my 560 Ti SLI at stock:

    http://www.3dmark.com/3dm11/6035982

    and oc'd:

    http://www.3dmark.com/3dm11/6037434

    So, if you don't want the expense of an all-new card at the 780's price level for a while, but do want some extra performance in the meantime, just get a 2nd 560 Ti (good prices on eBay these days); it will run very nicely indeed. My two Tis were only 177 UKP total - less than half the cost of a 680 - though at the moment I just run them at stock speed and don't need the extra from an oc. The only caveat is VRAM, but that shouldn't be too much of an issue unless you're running at 2560x1440, etc.

    Ian.
  • ahamling27 - Wednesday, May 29, 2013 - link

    Thanks for the reply! I thought about SLI, but ultimately the 1GB of VRAM is really going to hurt going forward. I'm not going to grab a 780 right away, because I want to see what custom models come out in the next few weeks. EVGA's ACX cooler looks nice, but I want to see some performance numbers before I bite the bullet.

    Thanks again!
  • inighthawki - Tuesday, May 28, 2013 - link

    Your comment is inaccurate. Just because a game requires "only 512MB" of video RAM doesn't mean that's all it will use. Data can be streamed into video memory on the fly as needed from the hard drive, so a game can easily use far more than the minimum as a performance optimization. I would not be the least bit surprised to see games on next-gen consoles using WAY more video memory than regular memory. Running a game that "requires" 512MB of VRAM on a GPU with 4GB of VRAM gives it 3.5GB of extra storage to cache higher-resolution textures.
  • AmericanZ28 - Tuesday, May 28, 2013 - link

    NVIDIA=FAIL....AGAIN! 780 Performs on par with a 7970GE, yet the GE costs $100 LESS than the 680, and $250 LESS than the 780.
  • EJS1980 - Wednesday, May 29, 2013 - link

    If by "on par" you mean wayyyyyyy better, then yeah.
  • An00bis - Friday, May 31, 2013 - link

    I really appreciate that the author showed us the texel and pixel fill rates in the synthetic benchmarks part of the review, instead of doing what I - an apparently inexperienced guy who isn't hired by one of the biggest tech review sites on the internet and clearly lacks knowledge in this field - would do, which is INCLUDE RESULTS MADE WITH 3DMARK11 IN THE PERFORMANCE PRESET (please don't include scores made in the Extreme preset; not everyone pays like $1000 for a benchmark license, have some common sense), or from another popular benchmarking program like Heaven, or maybe the normal score for 3DMark Vantage (even though nobody uses that anymore; even GPUs from the AMD 5xxx and NVIDIA 4xx series had DX11 support).

    I don't want to be rude - you're supposed to do your job - but when a stranger with no experience can do it better, at least have the decency to actually care about what you do. You do get paid for this, right?
  • yeeeeman - Sunday, June 9, 2013 - link

    Get an HD 7970 and spend those $200 ($650 - $450) on a vacation...
  • Mithan - Friday, June 14, 2013 - link

    Finally, a worthy upgrade for my GTX 580.
    Unfortunately, it is priced too high.

    If it was $500, I would probably order one now and be happy for the next 3 years or so, but I will wait for Maxwell.
