
  • Cakefish - Saturday, May 7, 2016 - link

    I eagerly await announcement of GTX 1080M. I guess the laptop version of Pascal will launch next month, following the trend that the GTX 900 series set.
  • hyno111 - Saturday, May 7, 2016 - link

    Likely a cut-down version of the 1070 to fit in 100W, with GDDR5X.
  • Notmyusualid - Saturday, May 7, 2016 - link

    We've been past the 100W barrier for quite some time now:

    GTX980 Notebook - 100+W (not exactly sure yet, reference design was 165W)
    GTX980M - 125W
    R9 M395X - 125W
    GTX880M - 104W

    Just a few examples for you.

    980Ms are soon to flood Fleabay, for the benefit of the likes of Meeeeeeeeeeeeeeee!
  • ImSpartacus - Saturday, May 7, 2016 - link

    Yeah, I don't know how that works. MXM-B is only supposed to be able to provide 100W of power, but oh well.
  • mczak - Saturday, May 7, 2016 - link

    Yes, but a version limited to 100W or so would still be very interesting. It might not be any faster than the GTX 980 notebook version, but far from every notebook can fit those super-high-TDP parts (and even in one that could, it would mean less noise). Notebook GPUs are about 1. Efficiency, 2. Efficiency, 3. Efficiency.
  • Notmyusualid - Saturday, May 7, 2016 - link

    Pardon me if I disagree with you pal.

    I run my rig with my SLI turned on pretty much 24/7, i.e. even when not gaming. I care not a jot about the lost energy efficiency.

    Nobody really games on battery with a 100W+ mobile GPU anyway. And in fact you almost can't. The system enters some sort of low-power mode that makes even old titles such as MW3 almost horrible to play. If the power goes off whilst I'm gaming (often), I'll just camp it out till it comes back on, as the whole experience is affected (modem/router is on a UPS).

    You can bet there will be a version at 100W like you mentioned. That 980 Notebook part wouldn't fit in my rig; it's not standard MXM 3.0b.
  • Namisecond - Sunday, May 8, 2016 - link

    It's not so much a matter of gaming on battery as it is a matter of cooling. It's not easy designing a system that dissipates 100W+ in a laptop form factor, especially the new thin-and-lights that can be under an inch in total height.
  • Notmyusualid - Monday, May 9, 2016 - link

    Not to have the last word...

    I don't think designing washing machines is easy either, but again, that doesn't affect me.

    My Alienware 18 can eat 330W of juice. They made it work, and they did it well. Some of us are not looking at thin / light / portable, as our nomadic existence doth not allow thy a Desktop. But that is the bed I made...
  • extide - Monday, May 9, 2016 - link

    Yeah dude, 330W for an SLI system IS the definition of efficiency. 330W compared to a desktop SLI rig is a TINY amount of power.
  • HollyDOL - Friday, May 13, 2016 - link

    Just a slight note that the desktop GTX 980 is about 40-50% more performant than the 980M (ref. http://gpu.userbenchmark.com/Compare/Nvidia-GTX-98... ), so it's hardly a fair comparison... If you calculate watts per frame rendered (roughly), desktops won't come out of that duel anywhere near that badly, even though they'll still very likely lose by something on a watts-per-frame basis.
  • Notmyusualid - Saturday, May 7, 2016 - link

    Sheeeeeeeeeeeeit - you and 100,000 others like you, and me!
  • jjj - Saturday, May 7, 2016 - link

    Paper launch and the lack of clarity about specs, pricing and availability are never encouraging. The 1070 should be fine but there are different degrees of "fine".
    Feels like they are supply limited and they'll only have the "premium for no reason" versions for a while.
  • JoeMonco - Saturday, May 7, 2016 - link

    How is there a lack of clarity about pricing? They announced the exact prices of the products. What more clarification do you need?
  • tamalero - Monday, May 9, 2016 - link

    I'd actually love to know why the hell it says "founders" with a higher price. Is the "Founders" edition a better deluxe version or what the hell?
  • extide - Monday, May 9, 2016 - link

    Again, it says in the article: they will have the reference cooler and PCB; the cheaper cards will be third-party and probably not available until a few weeks later.
  • vcarvega - Saturday, May 7, 2016 - link

    I may be missing something here... but you realize that they released specs, pricing, AND availability today right?
  • Jumangi - Saturday, May 7, 2016 - link

    Only for the 1080, not the 1070, for some reason.
  • vcarvega - Saturday, May 7, 2016 - link

    I read on another site that the 1070 is expected to arrive in June... don't think I remember reading a price though. Of course, I skimmed over anything related to the 1070, since the 1080 is the card I'm interested in.
  • mrvco - Saturday, May 7, 2016 - link

    Per this very article: 1070 - $379 / $449, June 10, 2016
  • Patapon456 - Saturday, May 7, 2016 - link

    GDDR5X has barely made it out of the factory, so they have to keep the amount of GDDR5X used to a minimum.
  • kaesden - Saturday, May 7, 2016 - link

    How is it a paper launch? They announced a release date. It's only a paper launch if you can't buy the product on the release date. We won't know if it's a paper launch until that date arrives.

    Why is hardware expected to be available the day it's announced when NO other product is ever subject to this? It's only a paper launch if they say "available today" and then it's not actually available. Come May 27th, if there's no product available, THEN it would be considered a paper launch.
  • Wardrop - Saturday, May 7, 2016 - link

    I don't get why companies shoot themselves in the foot with their naming schemes. I mean, Nvidia started this new naming scheme with the GTX 2xx series, completely skipping any 1xx series, and since then they have skipped many subsequent series, or wasted them by spreading low- and high-end products over two or three series. Now they're left having to go back to a four-digit numbering scheme. Doesn't seem to make any sense.
  • nandnandnand - Saturday, May 7, 2016 - link

    GTX 1080 for the 1080p gamer!
  • ImSpartacus - Saturday, May 7, 2016 - link

    Ikr, I was surprised that Nvidia actually went through with this naming scheme for that exact reason.
  • vcarvega - Saturday, May 7, 2016 - link

    I guess they expect anyone buying these cards to be savvy enough to know that these are 4K beasts... I agree it's a bit odd, but none of us will be purchasing the cards for 1080p gaming. The people likely to be confused by the naming convention also won't be buying dedicated video cards.
  • Donkey2008 - Saturday, May 7, 2016 - link

    According to Steam 1920x1080 is the most commonly used resolution, followed by 1366x768.
  • Meteor2 - Saturday, May 7, 2016 - link

    Once it was 640x480.
  • inighthawki - Saturday, May 7, 2016 - link

    I don't think he was in any way saying otherwise. He simply was pointing out that anyone willing to drop $600 on a new top of the line video card is probably smart enough to realize the 1080 in the name has nothing to do with resolution.

    The Steam statistics also show that the GTX 970 only makes up 5% of the market, and the 780/980 and Ti variants combined are under 3%. I would bet most of those users in the 3% are typically also running 4K, 1440p, Eyefinity, etc.
  • Donkey2008 - Monday, May 9, 2016 - link

    "He simply was pointing out that anyone willing to drop $600 on a new top of the line video card is probably smart enough to realize the 1080 in the name has nothing to do with resolution."

    I would say you overestimate the intelligence of people.
  • inighthawki - Monday, May 9, 2016 - link

    I would say that most people who have or are willing to spend $600 on a video card are either enthusiasts who know what they're doing, or the 12 year old kid whose parents pay for everything. In the latter case I would expect they buy it because it's "the newest" and not because they expect it to be for 1080p gaming.
  • tamalero - Monday, May 9, 2016 - link

    Not surprising... I mean, it's going to be quite a while before we see sub-$400 4K gaming monitors and sub-$300 standard 4K monitors.
  • sixto1972 - Saturday, May 7, 2016 - link

    They will be using it for 1080p VR gaming
  • LemmingOverlord - Saturday, May 7, 2016 - link

    Ikr?
  • soldier45 - Saturday, May 7, 2016 - link

    Pfft I stopped gaming at 1080p in 2008. Been at 4k for a year now. Stupid naming scheme.
  • Donkey2008 - Saturday, May 7, 2016 - link

    Pfft I stopped gaming on a computer and started running people over in my real car and shooting guns out of the window when I play GTA.
  • inighthawki - Saturday, May 7, 2016 - link

    The small minority of users that this card targets are well aware that the 1080 has nothing to do with 1080p resolution. Anyone who would seriously even begin to consider this card and think otherwise is extremely stupid. The naming scheme is not stupid at all.
  • WhisperingEye - Saturday, May 7, 2016 - link

    Are you seriously hung up on a card called GTX 1080 because you dismiss 1080p gaming? The two are unrelated. One is a successive model number, the other is a vertical resolution. I would totally never live in Phoenix because the 480 area code reminds me of those pitiful old televisions. You must be autistic.
  • fanofanand - Monday, May 9, 2016 - link

    Your sarcasm is duly noted, but why bash on autistic people? Some autistic kids would put you to shame when it comes to math. Call the guy an idiot, a braggart, or some other valid insult, but let's leave autistic people out of this.
  • nikon133 - Thursday, May 12, 2016 - link

    I'm quite confident that people who game on 1070p res will be happy, too ;)
  • jasonelmore - Saturday, May 7, 2016 - link

    I dont get why people are dissing nvidia about the naming scheme. 1080 is the rational model number after the 980. In fact, the press has been using this name for over a year. you guys just need to find something else to pick at.
  • Donkey2008 - Saturday, May 7, 2016 - link

    "Y" should be capitalized at the start of your last sentence.
  • theduckofdeath - Monday, May 9, 2016 - link

    Complains about grammar. Too lazy to write proper sentences...
  • Holliday75 - Monday, May 9, 2016 - link

    I thought it was funny.
  • fanofanand - Monday, May 9, 2016 - link

    Sentences shouldn't begin with numbers either, while we are nitpicking. :P
  • FourEyedGeek - Saturday, May 14, 2016 - link

    To continue the lessons started by others:
    - You are also missing an apostrophe in don't
    - "Dissing" is not a real word
    - NVIDIA is capitalised
    - Use have instead of "has"

    Have we missed anything?
  • ajp_anton - Saturday, May 7, 2016 - link

    "the company has opted to pay special attention to CUDA Core efficiency with Pascal, improving the throughput of the architecture as opposed to adding a significant number of additional CUDA cores."
    "GP104 hitting a higher percentage of its theoretical throughput in practice."

    Looking at the number of cores, the clock speed and the performance, Pascal is actually less efficient per core than Maxwell.
  • nandnandnand - Saturday, May 7, 2016 - link

    My calculation using the same numbers says otherwise:

    (2560 cores * 1607 MHz) / (2048 cores * 1126 MHz) = 1.78
    9 teraflops / 5 teraflops = 1.8
  • ajp_anton - Saturday, May 7, 2016 - link

    980: 2048*1216*2 = 5.0 TFLOPs
    980Ti: 2816*1075*2 = 6.1 TFLOPs
    1080: 2560*1733*2 = 8.9 TFLOPs, 78% over 980, 46% over 980Ti

    Those are just the theoretical performance numbers, which Pascal doesn't seem to hit as well in practice, since Nvidia claims it only performs 65% better than the 980 and 25% better than the 980Ti.
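    For anyone who wants to re-run that arithmetic, a quick sketch (plain Python, using the core counts and clocks quoted in this thread; an FMA counts as 2 FLOPs per cycle):

        # Theoretical single-precision throughput: cores * clock * 2 (FMA)
        def tflops(cores, mhz):
            return cores * mhz * 2 / 1e6

        gtx_980   = tflops(2048, 1216)   # ~5.0 TFLOPS
        gtx_980ti = tflops(2816, 1075)   # ~6.1 TFLOPS
        gtx_1080  = tflops(2560, 1733)   # ~8.9 TFLOPS

        print(gtx_1080 / gtx_980 - 1)    # ~0.78 -> +78% theoretical vs the 980
        print(gtx_1080 / gtx_980ti - 1)  # ~0.47 -> roughly +46-47% theoretical vs the 980Ti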
  • nevcairiel - Saturday, May 7, 2016 - link

    NVIDIA didn't provide actual numbers, just the slides with the graph and the performance scale; interpreting that is up to us. They do say 9 TFLOPS for the 1080 though.
  • soldier45 - Saturday, May 7, 2016 - link

    A lot of us didn't bother with the 980/980Ti cards, as the 780 series kept us around even when I moved to 4K last year. But now it's finally time to upgrade, so I'm guessing a 50% boost over my 780 Classified in games...
  • Lolimaster - Saturday, May 7, 2016 - link

    I wouldn't touch a 320GB/s GPU for 4K and the upcoming games.
  • Lolimaster - Saturday, May 7, 2016 - link

    AMD and Nvidia have put 4K on pause till 2017.

    GP100 and Vega will have 1TB/s; that's a proper 4K GPU.
  • Yojimbo - Saturday, May 7, 2016 - link

    Oh you mean you measured their slide and are basing your performance numbers on that. Even if your measurements are accurate, I don't think that was meant to be used that way and it doesn't necessarily result in accurate information.
  • CiccioB - Sunday, May 8, 2016 - link

    Those numbers alone are meaningless, as the 980Ti is not just 22% faster than the 980.
    Total performance is given by a combination that includes pure compute capability, but not only that (ROPs/TMUs/cache/bandwidth all contribute).
  • Yojimbo - Saturday, May 7, 2016 - link

    But you are talking about theoretical throughput on a core-by-core basis; the part of the article you quote is quite clearly talking about something else entirely: GP104 hitting a higher percentage of its theoretical throughput in practice.
  • ajp_anton - Saturday, May 7, 2016 - link

    No, I'm talking about how the theoretical performance went up more than the "practical performance" (according to Nvidia's own performance slide, which the article later interpreted the same way I did). Which is exactly the opposite of what you say: GP104 hits a *lower* percentage of its theoretical throughput in practice.
  • ajp_anton - Saturday, May 7, 2016 - link

    The GTX 1080 has the hardware and clockspeeds to perform 35% better than the Titan X. Maybe the leaked performance test (25% better) is fake and based on Nvidia's slide, but it would be weird for Nvidia to claim the card performs worse than it really does. Time will tell; it's really pointless to speculate here. I was just reacting to how Anandtech talks about efficiency improvements when what little evidence there is so far says otherwise.
  • Yojimbo - Saturday, May 7, 2016 - link

    The thing is, NVIDIA weren't really telling us anything as far as accurate quantitative information goes. They were just making a visual representation on a slide for marketing purposes. You're trying to read more information into the slide than it was probably intended to convey.
  • boeush - Saturday, May 7, 2016 - link

    When discussing efficiency, I think they mean performance per watt - as opposed to sustained in practice vs theoretical maximum. Yes, they have more cores and higher clocks - but at the same time the power limit only went up 15W, which is far less than 25%. Sure, much of that is down to the new process (which is itself yet to be optimized), but at least some is probably architectural.

    Additionally, there is a bit of an issue with apples to oranges comparisons, since these cards support more/newer features, and it's unclear to what extent those features would contribute to higher projected efficiency - or to what extent they'll actually be used in practice by actual non-benchmark software, and/or on what time table.
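    A rough way to put numbers on that (a sketch with assumed board-power figures of 165W for the GTX 980 and 180W for the GTX 1080, consistent with the "only went up 15W" point above, and the theoretical TFLOPS from earlier in the thread):

        # Perf-per-watt estimate from theoretical throughput and TDP
        tdp_980, tdp_1080 = 165.0, 180.0       # watts (assumed official TDPs)
        tflops_980, tflops_1080 = 5.0, 8.9     # theoretical FP32 TFLOPS from the thread

        perf_per_watt_gain = (tflops_1080 / tdp_1080) / (tflops_980 / tdp_980)
        print(perf_per_watt_gain)              # ~1.63 -> roughly +60% perf/W on paper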
  • Yojimbo - Saturday, May 7, 2016 - link

    Yeah that wasn't clear from your first post. I made my post and then afterwards read your reply to others who had replied to your original post. Ironically you made the same mistake I did, because I had already realized what you meant and replied to that before you posted this reply to me.
  • rhysiam - Saturday, May 7, 2016 - link

    I thought DP 1.4 involved compression to achieve higher effective bit rates. That surely means dedicated encode & decode hardware. Is that all finalised and implemented here? Seems pretty quick to me.
  • willis936 - Saturday, May 7, 2016 - link

    I believe DSC is an optional feature. DP 1.4 has the same throughput as 1.3 (32.4 Gbps), which is enough for uncompressed 4K120 at 24 bpp (23.9 Gbps). Going to 8K60 at 24 bpp requires 48 Gbps uncompressed. Whether or not performing lossy compression for the extra pixels is worth it is actually a pretty hard information theory question. I have a hunch that it is worth it. Who has 8K displays though?
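    A quick back-of-the-envelope check of those bandwidth figures (a sketch; it counts raw pixel data only and ignores blanking intervals and the link's 8b/10b encoding overhead, which drops DP 1.3/1.4's usable payload to roughly 25.9 Gbps):

        # Uncompressed video bandwidth: width * height * refresh * bits-per-pixel
        def gbps(width, height, hz, bpp):
            return width * height * hz * bpp / 1e9

        print(gbps(3840, 2160, 120, 24))  # ~23.9 Gbps for 4K120 at 24 bpp
        print(gbps(7680, 4320,  60, 24))  # ~47.8 Gbps for 8K60 at 24 bpp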
  • rhysiam - Saturday, May 7, 2016 - link

    DSC optional for DP 1.4? Wow, I hope not! It seems like the headline feature to me. Otherwise it's just DP 1.3 with some additional audio channels and colour modes. If I get an 8K display one day I don't want to have to check whether the graphics card happens to have the DSC required to drive it.

    I agree with you though... it's hardly a big deal. These will be the first mainstream DP 1.3 cards; a good boost over DP 1.2 and plenty for the time being. That's why I was surprised 1.4 even came up in the launch.
  • TristanSDX - Saturday, May 7, 2016 - link

    Nobody has 8K, but DP 1.4 is much more useful for 4K with HDR.
    Pascal doesn't have DP 1.4, but it may drive 8K @ 60 Hz with two cables.
  • jasonelmore - Saturday, May 7, 2016 - link

    It says Pascal has 3x DP 1.4 in the spec sheet.
  • silverblue - Saturday, May 7, 2016 - link

    It looks like NVIDIA's approach with 16nm is to clock the buggery out of its chips whilst keeping them relatively narrow (compared to AMD, that is). Is AMD still opting for wide and conservatively clocked?
  • Lolimaster - Saturday, May 7, 2016 - link

    Polaris is still gunning for 900-1000MHz with way higher efficiency.
  • Lolimaster - Saturday, May 7, 2016 - link

    I think they will unbench the kench with Vega.
  • lefty2 - Saturday, May 7, 2016 - link

    Rather unfortunate that Pascal has the same limitation as Maxwell: no asynchronous compute: http://www.bitsandchips.it/52-english-news/6785-ru...
  • Lakku - Saturday, May 7, 2016 - link

    Except it does have it, and that article you posted was rumor before this announcement.

    http://nvidianews.nvidia.com/news/a-quantum-leap-i...
  • lefty2 - Saturday, May 7, 2016 - link

    Hmm, that's interesting. Hopefully they will run the Ashes of the Singularity benchmark when the card is released to see how well it works.
  • steenss - Saturday, May 7, 2016 - link

    "New asynchronous compute advances..." ie same as Maxwell 2.0 + preemption... Just ask whether each SM can run graphics & compute tasks concurrently... ;)
  • Le Geek - Saturday, May 7, 2016 - link

    The memory bus width of the 780 needs correction. It was 384-bit wide like all GK110 cards, if I remember correctly.
  • Siddhartha202 - Saturday, May 7, 2016 - link

    Still missing async, as expected.
    In games where async is properly implemented, the AMD equivalent will gain at least 10-20%, and an AMD product only slightly better than the 980 Ti (their 1070 equivalent) could beat the 1080.

    I expect games to properly use DX12 at least by next year, after they completely drop DX11.

    This war is gonna be neck and neck... Can't wait to know more about these.
  • Lakku - Saturday, May 7, 2016 - link

    Except it has Async, so not sure where you're getting your information from.

    http://nvidianews.nvidia.com/news/a-quantum-leap-i...
  • steenss - Saturday, May 7, 2016 - link

    What Async does it have exactly?
  • nevcairiel - Saturday, May 7, 2016 - link

    And even if async compute remains subpar, big games are not going to bother relying on it, since building something that doesn't work on 70% of their users' systems is not economical, not when they could spend the time working on something else.
  • lefty2 - Saturday, May 7, 2016 - link

    For some games async compute doesn't give much benefit, but those that do benefit will definitely use it, simply because there is no alternative and they are not going to cripple the game just for the sake of Nvidia.
  • nevcairiel - Saturday, May 7, 2016 - link

    It's not "for the sake of NVIDIA", it's just a question of effort/time. They could be working on some other improvements that may benefit everyone, instead of working on something that may or may not benefit a minority of the users (based on hardware market share alone).
  • Drumsticks - Saturday, May 7, 2016 - link

    Next-gen consoles running Polaris will dwarf the high-end desktop discrete market. I suspect that this alone will be enough to get developers into asynchronous compute, especially when every single fps increase matters, and with that experience under their belt, I wouldn't be surprised to see it show up on desktops without too much trouble.
  • D. Lister - Saturday, May 7, 2016 - link

    You mean next-gen consoles would come with graphics hardware better than that of $600 desktop GPUs? Seriously?
  • D. Lister - Saturday, May 7, 2016 - link

    Never mind, my mistake, I misread. I thought you meant in terms of performance.
  • Azix - Saturday, May 7, 2016 - link

    It's actually not limited to next-gen consoles. The common wisdom has been that CURRENT consoles already use async and have huge gains from it. It would not be a matter of adding it to the game, since these console ports would already have it. The question is optimizing for Nvidia by removing it or changing how it works. AMD did something smart winning the console market, because that's where most of the demanding games come from. They end up gaining on PC by default.
  • nevcairiel - Sunday, May 8, 2016 - link

    If a game is demanding (in terms of performance) on a PC with a Pascal GPU, it's definitely not coming from a console with a super weak APU. :p

    Consoles may use Async Compute, but their GPUs are so incredibly slow compared to a Pascal 1080 that they can't even sustain 1080p@60 in modern games, either degrading to 30Hz or 720p, or possibly both. So let them have their Async Compute to gain 10-20% performance (if at all).

    NVIDIA has obviously planned to support Async Compute through drivers at least, if hardware support just doesn't fit their design for now, and there is no telling what the performance impact of that will be. Right now these driver features are not available on any GPU, so we can't really judge beyond wild speculation.

    I find it ludicrous to complain about the absence of such a low level feature. Just judge the final performance of games you care about, everything else... who cares how they get there?
  • Michael Bay - Saturday, May 7, 2016 - link

    If they even happen at all.
  • tamalero - Monday, May 9, 2016 - link

    How so? There are companies that got paid by Nvidia under the banner of "The Way It's Meant to Be Played" to cripple AMD (see PhysX).
  • eddman - Tuesday, May 10, 2016 - link

    Yes, and JFK was assassinated by aliens. /s

    I thought I wouldn't see such comments on anandtech.
  • zepi - Saturday, May 7, 2016 - link

    Consoles support it, and it is probably being used in them because slower CPUs and unified memory make it quite convenient for certain tasks.

    In PCs, AMD hardware supports it, but lacking proper unified memory, it is less convenient to offload calculations to the GPU for latency-sensitive use cases, and the subpar Nvidia implementation makes it less beneficial anyway.

    I'm doubtful about the actual benefits aside from some individual titles.
  • Kjella - Saturday, May 7, 2016 - link

    Looks nice, but I'll wait for big Pascal, HBM2 and closer to 300W in one card. So much unreleased potential still for a monster card, probably for a monster price too, but it'd last a long time I think.
  • soldier45 - Saturday, May 7, 2016 - link

    You'll be waiting at least another year then while the rest of us pass you by with better performing games.
  • jasonelmore - Saturday, May 7, 2016 - link

    You're crazy if you think the Ti is gonna be a year away. Let me remind you GP100 has been in production longer than GP104, and the dies with small defects are already being binned for the Ti.

    6 months at the most, and we'll see a new Titan.
  • darkfalz - Sunday, May 8, 2016 - link

    I hope so. In the 980 series the Titan came long before the Ti.
  • jasonelmore - Sunday, May 8, 2016 - link

    Because the Titan was the first GM200 part. The P100 is the first GP100 part, which is already in production. Nvidia made the big die at the same time as the smaller mainstream die this time around.
  • jasonelmore - Sunday, May 8, 2016 - link

    They did this as insurance against Polaris. If the soon-shipping Polaris somehow beats the 1080, then they have the Ti ready to launch soon.
  • inighthawki - Sunday, May 8, 2016 - link

    I doubt Polaris will be competition for the 1080. AMD themselves said they were not planning to target the high end market with Polaris.
  • Murloc - Saturday, May 7, 2016 - link

    What's the point in waiting and then keeping a monster card with high power consumption for years, to the point where it becomes obsolete?

    Why not buy a 1080 or 1070 now and change it within 1-2 years?
  • Namisecond - Sunday, May 8, 2016 - link

    Some people believe in this thing called "future proofness." I personally find it to be a money making marketing term to sucker hardware fans into buying top-tier gear.
  • willis936 - Sunday, May 8, 2016 - link

    Your thinking was correct five years ago, but take a look at the YoY gains in processors. Intel even admitted that node shrinks would be spaced out at an increasing rate from now on. If node shrinks go from 2 to 3 years, then 3 to 4 years, there becomes less and less reason to upgrade every half decade. Desktops may even have components fail before they're upgraded.
  • Kjella - Sunday, May 8, 2016 - link

    It's one thing to try future-proofing against unknowns in DirectX 13, 14, 15 etc., but in this case we know nVidia can do more because they have already done it. If the 1080 was pushing the reticle limit or the ATX power limit, that'd be different. That said, it had better be more like the Ti, less like the Titan.
  • TheinsanegamerN - Thursday, May 12, 2016 - link

    Future proofness works just fine. My GPU is 4 years old now, and is just starting to feel a little slow. In one game.

    Unless you switch to a higher rez, over-provisioning on your GPU needs can keep you satisfied for years to come.
  • psychobriggsy - Saturday, May 7, 2016 - link

    The most striking aspect of this is the overclocking capacity, possibly to over 2GHz.

    The Founders Edition is both a good idea (hey, we have working engineering sample dies of both GP104 and GDDR5X, let's make a limited run to claim a launch, AND charge more for them!) and annoying (obfuscates the true launch time, which is likely two months away given full production time cycles at the fab, GDDR5X availability, etc).

    Still, at least they've come out and stated their case, and they've done it well, and I think that the 1080 is likely out of reach for AMD's Polaris 10 (maybe they could do a Founders Edition of Vega 10, lol).
  • Pinko - Saturday, May 7, 2016 - link

    Anyone know if the DP ports are 1.3?
  • Eden-K121D - Saturday, May 7, 2016 - link

    They are 1.4. Interesting.
  • efi - Saturday, May 7, 2016 - link

    DP has always been 2.1 afaik.
  • LemmingOverlord - Saturday, May 7, 2016 - link

    Just a few remarks...

    Please don't say that the new SLI link provides "memory bandwidth". That's nonsense. It's at best 'data throughput'. We know what you mean, but considering the continuing limitation of not sharing the framebuffer across SLI, stating that it provides memory bandwidth is incorrect. Also, I find that the rigid bridge added a bit of stability to paired cards... less wobbly and all. Of course, if you have a massively high-end $500+ mobo you'll fork out for the non-standard PCIe slot spacing.

    I'll eagerly wait for the benchmark brigade (now that's a good name for a review site ;)). Despite its 2bn extra trannies, Pascal seems to bet on specific architectural improvements based on trends (e.g. VR) rather than providing substantial performance improvements. You'd expect the extra silicon, very high clocks and smaller process (with all its advantages) to give a massive lead over the previous gen... Instead they are bedazzling people with techtalk and sleight of hand. On the other hand, they could be playing it coy for a future 'Ti'.

    Sit tight, that's what I'll do
  • CaedenV - Saturday, May 7, 2016 - link

    So... does this seem rushed to anyone else? Not saying we aren't due for new cards, or that I'm questioning the performance improvements. But the lack of practice/polish in the presentation; the allusion that only the 'founders edition' will be available at launch; the likely planted sycophant in the front row constantly shouting 'I can't believe it', 'I can afford that', etc.; plus there was poor Tom's amateur hour. Then there is the fact that we have a paper announcement, and literally none of the sites have product reviews up today.

    All of this makes me think that the product isn't really ready yet (thus no review units), or they were not planning on making the announcement yet and reviewers are still under NDA for a while. Either way it's really odd. Why push the announcement up? Are they expecting AMD to announce something soon? (Even if they did, I find it hard to believe it could compete with this.) The whole thing smells fishy.
  • zodiacfml - Saturday, May 7, 2016 - link

    Right, I noticed it too. They are probably not far ahead of AMD when it comes to volume. Both are pumping out supply right now. I'm just not sure why they have to announce it so far ahead of time.
  • willis936 - Saturday, May 7, 2016 - link

    That lady in the front row was not lending the presentation any help. If she was planted, Jen probably has the people who paid her on the way out of the building carrying their stuff as we speak.
  • nevcairiel - Saturday, May 7, 2016 - link

    Announcements and reviews don't necessarily go hand-in-hand. It's still 3 weeks until market availability; they may want to space out the events a bit and therefore delay the NDA until closer to the launch.
  • TristanSDX - Saturday, May 7, 2016 - link

    For $600 the GeForce 1080 will be 15% faster than the 980Ti (or 5% faster than the Titan X). The regular GeForce 1070 will end up with the performance of a 980 for some $100 less; better, but still not great. Overall a disappointment, and small progress in FPS/$.
    Technically Pascal seems like a mistake: 7.2 bln transistors at 2.1 GHz should provide much more than 15% extra performance over a Titan X or 980Ti.
    Despite all the hype, Pascal looks like one of the worst 'new gens' released by NV.
  • nevcairiel - Saturday, May 7, 2016 - link

    I think you didn't interpret the numbers properly. The base-clocked variant is ~20% faster than a 980 Ti, and that's at a 1.7GHz boost. An OC to 2.1GHz would give it another 20% on top of that.
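    The clock arithmetic behind that last point (a sketch; it assumes performance scales linearly with core clock, which real games rarely quite manage, and uses the 1733 MHz boost clock quoted earlier in the thread):

        # How much headroom a 2.1 GHz overclock adds over the rated boost clock
        boost_mhz, oc_mhz = 1733, 2100
        print(oc_mhz / boost_mhz - 1)   # ~0.21 -> roughly +21% clock speed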
  • soldier45 - Saturday, May 7, 2016 - link

    Not everyone has a Titan X or 980 Ti. So it will be def worth it for those of us like me still using 2 and 3 year old cards such as the 780.
  • just4U - Monday, May 9, 2016 - link

    Depends on resolution really... a 780 has a lot of life left in it if you're gaming at 1080p.
  • euskalzabe - Saturday, May 7, 2016 - link

    I'm surprised to agree with your statement, but I'm also disappointed with this. Yes, it's a good hardware advancement, but there's nothing exciting other than more performance, more speed and, sadly, more price. I've always been a GeForce consumer (2>4>6600>8600>9600 GT>260>470>770) and for the past few years I've felt that $330 is already stretching it a bit for the performance we get in the x70 cards. Now they want to bump that to $380? Yeah, no... no matter the "extreme" performance, it does not seem like a good deal to me.

    I'm suddenly, and for the first time, much more interested in what AMD has to say about Polaris. A better $/perf ratio may be the better buy this round, and I'm genuinely intrigued by their talk about HDR pixel processing (and hopefully HDR monitors?). I'm still a 1080p gamer and will remain so for several years, so AMD may be the better option now.
  • Dug - Saturday, May 7, 2016 - link

    You didn't look at everything it can do, did you?
  • euskalzabe - Saturday, May 7, 2016 - link

    Very much did. The only interesting new technology is the multi-projection. If that worked immediately with all titles, I'd buy it in a heartbeat. This not being the case... all I'm saying is wait and see before forking out nearly $400, especially without knowing/seeing what AMD has to offer with Polaris. Rushed purchases are never smart.
  • AnnonymousCoward - Saturday, May 7, 2016 - link

    Yup. Stated another way, the 1070 gives what's already been available (980) and the $600 1080 is 25% faster. Screw that, I'll wait for a faster card.
  • jasonelmore - Saturday, May 7, 2016 - link

    A 1080 is faster than two 980s in SLI. Dunno where you get 25% from that. Hell, it's 25% faster than a Ti, which is a $700 card.
  • just4U - Monday, May 9, 2016 - link

    I'll believe that when I see it, Jason. I highly doubt it's going to be 2x 980 performance...
  • AEdouard - Thursday, May 12, 2016 - link

    Two 980s in SLI aren't the equivalent of the power of one 980 plus another 980. Sometimes it can be close, but SLI is not 2x faster.
  • AEdouard - Thursday, May 12, 2016 - link

    What? According to their presentation, the 1070 is on par with the Titan X, meaning significantly faster than the 980, and the 1080 is significantly faster than the Titan X.
  • digiguy - Saturday, May 7, 2016 - link

    It's funny that the first single card that has enough power to run AAA games in 4K smoothly (if it's true that it's better than 2 980s in SLI) is named 1080... Joking aside, this can be pretty interesting for external enclosures that do not allow SLI. Looking forward to seeing how it performs with something like the Razer Core in 4K over Thunderbolt 3, for instance with a laptop quad-core i7 like the Dell XPS 15. If the quad core and TB3 do not bottleneck this card (or a future 1080 Ti) significantly, it will be my next purchase.
  • vladx - Saturday, May 7, 2016 - link

    "If the quad core and TB3 do not bottleneck it significantly this card (or a future 1080ti), it will be my next purchase"

    The 1080 will be BW starved like hell with a TB3 connection.
  • osxandwindows - Sunday, May 8, 2016 - link

    Nvidia graphics perform pretty well under PCIE x4, actually.
  • digiguy - Saturday, June 4, 2016 - link

    Yes, the first tests show only a 10-15% loss of performance with a GTX 1080 in a Razer Core, compared to a desktop with the same card.
  • soldier45 - Saturday, May 7, 2016 - link

    So this should be basically 4-5x the power of my 780 Classified when compared to the 980Ti/Titan X. Also, why do they keep comparing this to the 980 and not the 980 Ti? The 980 is 2 years old now why not compare it to last years models..
  • guidryp - Saturday, May 7, 2016 - link

    Because it looks better when they compare it to an old model.

    The real question is why AnandTech is going along with that. It's not their job to make NVidia look better.

    The 1080 is priced like a 980Ti, and most of us don't give a hoot about naming conventions. We care about performance/$.

    Since this is priced like a 980ti then in should be compared to a 980ti.
  • nevcairiel - Sunday, May 8, 2016 - link

    They compared it to a Titan X, which should be slightly faster than a 980Ti still (or at least rather similar), so you can take that number.
  • The_Assimilator - Sunday, May 8, 2016 - link

    "Since this is priced like a 980ti then in should be compared to a 980ti."

    I'd like to introduce you to this concept called "capitalism".
  • Yojimbo - Saturday, May 7, 2016 - link

    Because it's not meant to be a successor to the 980Ti, just as the 980Ti was not meant to be the successor of the 980
  • Jumangi - Sunday, May 8, 2016 - link

    The price makes it the successor regardless of the name used.
  • The_Assimilator - Sunday, May 8, 2016 - link

    No, no it doesn't.
  • Kutark - Wednesday, May 11, 2016 - link

    No, no it doesn't.
  • sor - Saturday, May 7, 2016 - link

    There will inevitably be a 1080Ti
  • RussianSensation - Saturday, May 7, 2016 - link

    Your math is way off.

    Add 25% over Titan X and divide by your 780.
    http://www.techpowerup.com/reviews/ASUS/GTX_950/23...
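    If you want to plug in your own numbers, that recipe looks roughly like this (a sketch; the Titan X vs 780 figure is a placeholder to be read off the linked TechPowerUp summary chart, not a value from this article):

        # Estimate of GTX 1080 vs GTX 780, following the recipe above
        titan_x_vs_780 = 1.80       # placeholder: Titan X relative to GTX 780 from the TPU chart
        gtx1080_vs_titan_x = 1.25   # the ~25% figure quoted in this thread

        print(titan_x_vs_780 * gtx1080_vs_titan_x)  # rough 1080-over-780 multiplier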
  • medi03 - Saturday, May 7, 2016 - link

    25% over TX at what? (7 billion chip vs 12 billion chip, btw)
  • medi03 - Saturday, May 7, 2016 - link

    PS
    Even if so, why would they sell a chip faster than the TX for less than the TX?
    Doesn't make sense.
  • Gigaplex - Saturday, May 7, 2016 - link

    Because it's significantly cheaper to manufacture, so there's a larger profit margin. A larger profit margin at higher volumes is definitely preferable.
  • The_Assimilator - Sunday, May 8, 2016 - link

    "The 980 is 2 years old now why not compare it to last years models.."

    I dunno, maybe because 980 and 1080 share the same place in the product stack? Crazy idea, I know.
  • Kutark - Wednesday, May 11, 2016 - link

    They compare it because it's the direct successor. It's not a 1080 Ti, it's a 1080. They will later release a 1080 Ti, which will be the direct comparison to the 980Ti.

    That's like saying that Ford should compare the 2016 Mustang with a V6 to the previous generation Mustang GT with a V8. It's not a direct comparison.
  • AEdouard - Thursday, May 12, 2016 - link

    Because the 1080 is the "new" version of the 980. They'll surely release the successor to the 980Ti or Titan X, which will be significantly faster (and pricier) than the 1080.
  • azrael- - Saturday, May 7, 2016 - link

    I wonder how much actually usable memory the GTX 1070 will have this time around...
  • Kutark - Wednesday, May 11, 2016 - link

    Yeah, I doubt they'll be making that mistake again.
  • zeeBomb - Saturday, May 7, 2016 - link

    All aboard the hypeee
  • willis936 - Saturday, May 7, 2016 - link

    When you guys do your GPU reviews this year can you please keep the HBM2 vs. GDDR5X discussion in mind? I would really like to see what sort of loads cause the memory bandwidth to be the bottleneck on this year's cards.
  • Kutark - Wednesday, May 11, 2016 - link

    I can already tell you it's not going to make any statistically significant difference. Memory bandwidth hasn't been a bottleneck for quite a while. You can see this with people who do memory overclocks of 15-25% on their video cards and see a 1-3% increase in FPS.

    The reason they are using HBM2 on the professional cards is that the type of computations those are generally used for is very heavily dependent on memory bandwidth, whereas games are a totally different ballgame.
  • djayjp - Saturday, May 7, 2016 - link

    20-25% faster...? Wake me up when the real one launches....
  • djayjp - Saturday, May 7, 2016 - link

    That's what a double node shrink gets you apparently lol scammers
  • sudz - Saturday, May 7, 2016 - link

    Don't worry guys, the 1070 will only have 7GB of useful ram.
  • darkfalz - Sunday, May 8, 2016 - link

    Well, that leaves 1GB for framebuffer where speed is not important. I'm pretty sure this is how NVIDIA "optimised" the 970.
  • Donkey2008 - Saturday, May 7, 2016 - link

    Remember when a new card was released with just a press announcement? It makes me laugh that everyone accused Steve Jobs of being a showboat car salesman with his pretentious product release events, but now several companies do the same thing, with cornball events as if a new computer component is going to revolutionize the world. Nice Fonzie jacket, Huang. Ehhhhhhhh (thumbs up).

    I fully expect an exclusive media event with live webcast when G Skill releases higher clocked DDR4.
  • euler007 - Sunday, May 8, 2016 - link

    It's called a sales presentation. Jobs didn't invent that. 50 years ago this would have been done in front of the biggest client; today they can stream it on the web for everyone.
  • halcyon - Saturday, May 7, 2016 - link

    Not actually DisplayPort 1.4 certified.

    Compared the OC'd "Founder's Edition" at 2.1 GHz to a GTX 980. Misleading or what?

    3x the speed of a GTX 980? Really? Talk about cherry-picked examples.

    Let's wait for actual game benchmarks. They'll be 50% better. Not 100%, definitely not 200%.
  • nevcairiel - Sunday, May 8, 2016 - link

    The only time the OC even came up was in the VR demo; the comparisons all use stock-clocked GPUs.
  • Meteor2 - Saturday, May 7, 2016 - link

    This announcement -- with shipping hardware coming in the same month -- rather rains on AMD's parade. From Anandtech's article covering the announcement of Polaris, 'the small Polaris GPU in question could offer performance comparable to GTX 950'. Given that the 1080 is ~4.5x the performance of the 950, according to the chart in the presentation, AMD are realising the efficiency gains of 16 nm but aren't getting the same performance.
  • kedesh83 - Saturday, May 7, 2016 - link

    I believe the GTX 780's memory bus width is 312, right? Not 256...
  • TheinsanegamerN - Thursday, May 12, 2016 - link

    384 bit.
  • medi03 - Saturday, May 7, 2016 - link

    A 7 billion transistor chip (1080) will be "20-25%" faster than a 12 billion transistor chip (TX), really?
  • dragonsqrrl - Saturday, May 7, 2016 - link

    GM200 has 8B transistors
  • Gigaplex - Saturday, May 7, 2016 - link

    If those 7B transistors are clocked twice as fast, then sure, why not.
  • kedesh83 - Saturday, May 7, 2016 - link

    Edit: I meant that the 780 had a 384-bit memory bus; the graph above says 256.
  • anubis44 - Saturday, May 7, 2016 - link

    https://youtube.com/watch?v=aSYBO1BrB1I

    NVIDIA has already lost the gaming wars. This is all pointless.
  • D. Lister - Sunday, May 8, 2016 - link

    I believe you, but only because your reference material appears extremely credible. To hell with Nvidia, they are all but dead to me now.
  • varad - Sunday, May 8, 2016 - link

    While this video raises some valid points [having all consoles in the bag does give AMD good leverage with developers], there are some points that are inconsistent or highlight a lack of deeper understanding:

    The whole argument seems to be built around the "scalability" term mentioned in some AMD slide. Whoever made the video claims this means multiple smaller GPUs on a single die [connected through an interposer]. And this apparently means the volume manufacturing economics will work for AMD while NV will be stuck making larger chips with lower yields. A few thoughts and doubts on this supposed masterplan:

    1. I am going to guess manufacturing with interposer based technology isn't exactly easy or cheap. This is considering that similar technology is used for HBM based GPUs today and those are mostly limited to higher end [$$$] GPUs.

    2. Whoever made the video assumes that AMD would manufacture smaller GPU dies which are then connected using an interposer on a bigger die to make a high performance GPU. While this is surely possible, it also would be wasteful since there are probably multiple video [encoders/decoders] and IO [PCIE, display etc] components that do not necessarily need to be there multiple times. So this would probably require such components to then be put on a separate die. This requires more investment since such a die would then need to be manufactured and tested separately before being integrated with the regular smaller GPU dies.

    3. As far as I understand, consoles have their CPU and graphics integrated on the same die. I do not think this will change because consoles are not known for high margins. So for the proposed masterplan to work, console makers either need to agree to use separate CPU & the multi-GPU dies OR AMD is able to put everything on a big die using an interposer. The former probably would not be a good idea for AMD since that would open the door for Intel + NV to capture the CPU/GPU socket. So that means the only scenario this would work for AMD is if they can connect the CPU, multiple GPUs and, as explained in #2, another die for the non-replicated components using an interposer. Again, while this is technically possible, I'm not sure if it will be economically feasible.
  • Gigaplex - Sunday, May 8, 2016 - link

    Is that why NVIDIA holds the majority of the discrete GPU market share and AMD is struggling to avoid bankruptcy?
  • The_Assimilator - Sunday, May 8, 2016 - link

    Yeah, I'm sure you and some random Youtuber are the only ones who have deduced AMD's "master plan". Certainly nVIDIA has no idea what AMD is attempting to do. /s
  • bill44 - Saturday, May 7, 2016 - link

    Is this a 10-bit card? Does it support HDR? What about audio sampling rates, hardware decode for the latest audio & video codecs (10-bit HDR H.265), WCG support, HDCP 2.2, HDMI 2.0b?
  • mdriftmeyer - Saturday, May 7, 2016 - link


    Zero mention on Nvidia's site for 10-bit, or HDR, not to mention H.265 Decode/Encode.

    Sorry, but Polaris curb stomps Pascal.

    HDMI 2.0b, DL-DVI, HDCP 2.2 listed.

    1 - 7680x4320 at 60 Hz RGB 8-bit with dual DisplayPort connectors, or 7680x4320 at 60 Hz YUV420 8-bit with one DisplayPort 1.3 connector.
    2 - DisplayPort 1.2 Certified, DisplayPort 1.3/1.4 Ready.
  • nevcairiel - Sunday, May 8, 2016 - link

    10-bit output has been supported on practically all recent NVIDIA cards, not sure what you are on about. For H265 support, they don't generally list that in the specs, but it's extremely likely that it inherits the latest capabilities from the previous generation (i.e. what the GTX960 offers), which would include H265 8/10-bit decode and 8-bit encode.

    HDMI 2.0b includes HDR support (it's only metadata to inform the display of it); it's up to the applications to make use of it.
  • wintermute000 - Sunday, May 8, 2016 - link

    "but its extremely likely"

    For my money I'd want to be completely sure LOL
  • nevcairiel - Sunday, May 8, 2016 - link

    Then wait until someone can confirm this. Like I said they never listed those things in the specs of any previous GPU, so it not being in there now means nothing.
  • mdriftmeyer - Sunday, May 8, 2016 - link

    They claim 10 bit for DirectX, and even XOrg on Linux, but it's a bust.

    https://devtalk.nvidia.com/default/topic/771081/li...
  • Murloc - Sunday, May 8, 2016 - link

    We don't know if it's included, but it likely is.

    That's hardly grounds for curb stomping.

    The rest (mostly perf/watt) is all yet to be seen.
  • D. Lister - Sunday, May 8, 2016 - link

    For 10-bit color support, you need a Quadro or Firepro. No point in putting such functionality in a consumer product where screen response time is often far more important than color accuracy.
  • bill44 - Sunday, May 8, 2016 - link

    10-bit is not just for color accuracy; it's essential for HDR. HDR is very important for games and UHD BD playback (which is coming to PCs). Polaris promised 10-bit HDR, and HDR monitors will also be available by the end of the year.
    Also, there are many who will buy a GTX 1080 for madVR etc., not just for games. So yes, going forward, 10-bit is essential in a consumer card. There are more monitors available now with QD technology, which will help with WCG; the P3 gamut and beyond is now part of iMacs, along with future HFR display technology (like the Dell OLED 120Hz monitor).
    As we go forward, DP1.3/1.4 over USB Type-C will become available too. Unfortunately, TB3 can only support DP1.2 in its present form.
  • nevcairiel - Sunday, May 8, 2016 - link

    Since you already named madVR, 10-bit already works with any current NVIDIA card, so that's not going to change.
  • bill44 - Sunday, May 8, 2016 - link

    I mean a proper systemwide driver that works with windowed programs and major apps from Adobe. I know the limitations as far as Windows goes (MS needs to sort out color management once & for all, preferably using systemwide 3D LUTs); Macs are better in this regard. However, everything starts with proper hardware and driver support.
  • nevcairiel - Sunday, May 8, 2016 - link

    Honestly, the only people who need this in Adobe apps are content producers, and they might as well buy a Quadro/FirePro.

    It's otherwise just a Microsoft limitation; they could easily drive the compositor in 10-bit and allow apps to use that.
  • bill44 - Sunday, May 8, 2016 - link

    Thanks. What's the difference between HDMI 2.0a and (here) 2.0b? Some sites say level 'b' is only 10.8Gb/s, others disagree. Does it matter? Or should we just wait for 2.1 with dynamic metadata support?
  • nevcairiel - Monday, May 9, 2016 - link

    There is a bunch of confusion going around about that, since the naming is slightly confusing.

    HDMI 2.0 had two levels, Level A and Level B. Level B is what all older NVIDIA GPUs offered, HDMI 1.4 speed and some HDMI 2.0 features, ie. 4k@60 using 4:2:0 chroma to get the bandwidth down.

    HDMI 2.0a and HDMI 2.0b are not related to those two levels, but are improvements on the HDMI 2.0 standard. 2.0a adds (static) HDR support, but I'm not certain what 2.0b does, it doesn't seem documented much yet, perhaps not officially released yet.

    Unfortunately some people used the 2.0a/2.0b naming for the two tier levels in the past, causing such confusion.

    Admittedly it is possible that NVIDIA would also use the confusing naming, but considering they also claim DP 1.3 support at the same time, and previous NVIDIA GPUs had 18gbps HDMI chips already, it would seem rather odd to take a step back.
  • bill44 - Monday, May 9, 2016 - link

    HDMI 2.0b has been released:
    http://www.hdmi.org/manufacturer/hdmi_2_0/
    I can't see a difference between 'b' vs 'a'.
    Confusing:
    https://disqus.com/home/discussion/trustedreviewsl...
    HDMI 2.1 has not been released:
    http://www.flatpanelshd.com/news.php?subaction=sho...
    https://en.wikipedia.org/wiki/HDMI#Version_2.0
    it seems to add 'Dynamic metadata' only.

    As I said, very confusing. Also DP 1.3 could use Type-C connector (alt mode), and may come to TB3.
    http://www.vesa.org/faqs/#DisplayPort 1.3 FAQs
    Very confusing, as there are monitors (available or coming soon) that use Type-C connectors. As such, we need DP-to-Type-C cables. Would it not have made sense to introduce Type-C on this new-gen GTX 1080?
  • nevcairiel - Sunday, May 8, 2016 - link

    This is not true. You only need a Quadro for 10-bit OpenGL support. Direct3D11 10-bit works on any recent consumer GPU over DisplayPort or HDMI, both from NVIDIA and AMD.

    It has a few limitations, like it only works when you use actual fullscreen mode and none of the borderless windowed modes, but in general a game could use it if it wanted to. Some video players use it for 10-bit videos.
  • Eden-K121D - Sunday, May 8, 2016 - link

    Are you hallucinating?
  • mitr - Saturday, May 7, 2016 - link

    Any word on the double precision performance? In other words is this a reasonable compute card for poor people :)
  • willis936 - Saturday, May 7, 2016 - link

    I'd wager it'll be 1/32 of FP32. Also, coupled with the lackluster memory bandwidth, I wouldn't think this is a very beefy compute card. Disappointing for me, since I wanted to upgrade and every GPU I buy ends up as a compute card eventually.
  • mitr - Sunday, May 8, 2016 - link

    That's disappointing. Nvidia is trying to herd the compute market towards expensive dedicated compute cards. This is a clear departure from the prior policy of nurturing the compute market by providing cheap, albeit handicapped, high-end graphics cards like the GTX Titan.
  • nevcairiel - Sunday, May 8, 2016 - link

    There may be a new titan at a later point, we don't know that yet. But the 1080 is clearly a consumer GPU, which never really had the compute performance.
  • dragonsqrrl - Monday, May 9, 2016 - link

    This is GP104. Double precision has never been a priority with x04 GPUs. There will likely be a 'cheap' GeForce card based on GP100 at some point in the next year. And the GPU compute market has been dominated by "expensive dedicated compute cards" since it began; the Titans didn't change this. The Titans simply filled a research/developer niche in the desktop market, among other things.

    Just out of curiosity, the context of your comments has been strictly double precision, so what are you referring to when you say the Titan is 'handicapped'?
  • Kutark - Thursday, May 12, 2016 - link

    To be honest the 7xx series were the black sheep in that respect. It never has been normal for consumer grade cards to have the hardware to do decent double precision. It was a calculated risk that NVidia took that honestly just didn't pan out to good sales.
  • TheinsanegamerN - Thursday, May 12, 2016 - link

    What about Fermi?
  • dragonsqrrl - Thursday, May 12, 2016 - link

    It's been more normal than not since double precision became a priority with Fermi. The exception has been Maxwell, and I suspect that had a lot to do with transistor/die area constraints imposed by 28nm. If Nvidia had the die area to incorporate a healthy FP64 core ratio while maintaining acceptable single precision performance scaling over GK110 and GM204, GM200 probably would've looked a lot different.
  • Patapon456 - Saturday, May 7, 2016 - link

    I came up with an educated guess on the TMU/ROP count that seems consistent with the stated 9 teraflops of performance: 180/90 for the GTX 1080. You can verify the math comes out very close to 9 teraflops.
  • P39Airacobra - Sunday, May 8, 2016 - link

    Well! Looks like my new 970 is now junk lol.
  • Namisecond - Sunday, May 8, 2016 - link

    Really now? If you don't want it, give it to me. I'll be more than happy to take it off your hands after you replace it lol
  • just4U - Monday, May 9, 2016 - link

    Not to mention my brand-spanking-new MSI 390... or the 960 it replaced. Ah well...
  • darkfalz - Sunday, May 8, 2016 - link

    As owner of a decently overclocked 980 I think I'll wait for the 1080 Ti.
  • HollyDOL - Monday, May 9, 2016 - link

    As an owner of a decently power-hungry and aging GTX 580, I'm looking forward to benchmarks to see if The Time has come :-)
  • idris - Monday, May 9, 2016 - link

    As an owner of an 8800GTX purchased in '07 (oh my..), I think I'm going to wait for VEGA at the end of the year... But more importantly, hoping to view some REAL benchmarks on the initial offerings on the new process nodes (14nm GloFo/Sammie for AMD, 16nm TSMC for NV) - AND NOT cherry picked CrapWorks games in the benchmarks using binned cards from NV but the same retail cards for consumers! NB. AT, take note - hoping your reviews are also from retail cards (when they're eventually released) & not binned cards direct from NV.
  • HollyDOL - Monday, May 9, 2016 - link

    Used to have an 8800GT myself; it was fine until one day it decided to give up on me :-)

    After terrible dev experiences with ATI (back then) I somehow can't make myself look at the Radeon brand as a potential replacement. Even though, logically, AMD cards today have only that 'Radeon' name in common with the old ones...
  • just4U - Wednesday, May 11, 2016 - link

    Since we are speaking of the 8800 GTS... I owned them all, including the 3850, 3870, 4850, 4870, 4890, and
    Nvidia's 8800, 9800, 9800 512...

    They were all good cards and the drivers on both sides were decent too. Yes, some quirks here and there, but both companies had that.
  • Kutark - Thursday, May 12, 2016 - link

    Oh The Time™ has definitely come.

    Even a 1070 is going to be a light years improvement over a 580.
  • BrokenCrayons - Monday, May 9, 2016 - link

    It's an interesting announcement, but in some ways rather disappointing too. The 1080 is suffering from wattage creep, as it now requires more power than the card it was meant to replace. Yes, there's more performance too, but I was hoping that the focus would be on getting power consumption and heat output under control with this generation, as opposed to greater performance, which is probably being driven by the soon-to-flop VR fad. This is yet another GPU in a long line of Hair Dryer FXs that started with the 5800 and that silly fairy girl marketing run.

    It'll unfortunately probably be at least a year until the GT 730 in my headless Steam streaming box gets a replacement, unless AMD can come through with a 16nm card that meets my requirements. I'm in no hurry to replace my desktop GPU, but when I do, I would much prefer a sub-30-watt half-height card over one of these silly showboat toys.
  • jzkarap - Monday, May 9, 2016 - link

    "soon-to-flop VR fad" will be tasty crow in a few years.
  • dragonsqrrl - Monday, May 9, 2016 - link

    I'm curious about your perception of 'wattage creep'. The only way I could see this having any validity is if your only point of comparison is the 980. 180W TDP is not at all unusual for a x04 GPU. The 560Ti had a 170W TDP, and the 680 195W. The point of improved efficiency isn't necessarily to reduce power consumption at a given GPU tier, it's to improve performance at a given TDP. That's how generational performance scaling works in modern GPU architectures. So I'm not sure what you were expecting since the 1080 achieves about 2x the perf/W of the 980Ti. And your comparison to the 5800 falls so far outside the realm of objectivity and informed reality that I'm not even sure how to address it.

    In any case the 1080 clearly isn't targeted at you or your typical workload, so why do you care? In fact its most relevant attribute is its significantly improved efficiency, which you should be excited about given the implications for low-end cards based on Pascal. Are you suggesting that merely the existence of this type of card somehow poses a threat to you or your graphics needs?
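    For anyone who wants to sanity-check that 2x perf/W figure on paper, here's a rough sketch using the announced TDPs and the usual 2-FLOPs-per-core-per-clock estimate. The core counts and boost clocks below are assumptions, and paper TFLOPS are not measured game performance:

        # Paper-spec perf/W comparison using marketing TFLOPS, not measured frame rates
        cards = {
            "GTX 980 Ti": (2816 * 1.075 * 2 / 1000, 250),  # (assumed TFLOPS, TDP in W)
            "GTX 1080":   (2560 * 1.733 * 2 / 1000, 180),
        }
        for name, (tflops, tdp) in cards.items():
            print(name, round(1000 * tflops / tdp), "GFLOPS/W")
        # prints ~24 vs ~49 GFLOPS/W -- roughly 2x on paper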
  • AnnonymousCoward - Monday, May 9, 2016 - link

    Very well said. And this guy wants <30W! He might as well stick to iGPU.
  • BrokenCrayons - Tuesday, May 10, 2016 - link

    With regards to wattage creep, I only compared the new GPU to the current generation card which NV is using as a basis of comparison in its presentation materials. Prior generations aren't really a concern, but if they are taken into account, the situation becomes progressively more unforgivable as we iterate backwards toward the Riva 128, since power requirements were significantly lower despite the less efficient designs of earlier graphics cards.

    I also don't think it's fair to take into account a claim of 2x the performance per watt until after the 1080 and 1070 benchmarks are released that give us a better idea of the actual performance versus the marketing claims as those so rarely align. However, if I were to entertain that claim as accurate, I'd find the situation even more abhorrent since NV failed to take the time to use the gains in efficiency to do away with external power connectors and dual slot cooling, both of which go against the grain of the progressively more mobile and power efficient world of computing in which we're living now. It's as if they decided the best solution to the problem of graphics was "make it bigger and more powerful!" That sort of approach doesn't actually impress people who aren't resolution junkies, but it's NV's business to produce niche cards for a shrinking market in order to obtain the halo effect of claiming the elusive performance crown so people buy their lower tier offerings as a consequence of the competition between graphics hardware unrelated to their current purchase.

    I do suggest and maintain that the existence of high wattage, large graphics cards continues to threaten the concept of efficiency. There's only so far down that any given design can scale efficiently. By failing to focus exclusively on efficient designs, NV is leaving unrealized performance on the table. Once again, that's their concern and not mine. However, I would prefer they announce and release their low end cards first, bringing them to the forefront of the media hype they're attempting to build because those parts are the ones that people ultimately end up purchasing in large numbers as indicated by statistically significant collection mechanisms like the Steam survey.
  • BiggieShady - Wednesday, May 11, 2016 - link

    Also existence of croissants is threatening the concept of muffins, and existence of space shuttle is threatening the concept of a bicycle.
  • dragonsqrrl - Thursday, May 12, 2016 - link

    The problem is modern performance scaling, particularly with GPUs, works differently than it did 1 to 2 decades ago. Now efficiency is essentially the same as performance because we've hit the TDP ceiling for every computing form factor including desktop/server (around 250-300W single GPU). As you mentioned this wasn't the case back when Riva 128 launched. The difference is chip makers can no longer rely on increasing TDP to help scale performance from one gen to the next like they could back then. For Nvidia this shift gained a lot of momentum with Kepler. So while not everything going back to the beginnings of desktop GPUs is relevant to modern performance scaling, I would argue that everything since Fermi is. This is further evidenced by the relative consistency in die sizes, naming convention, and TDPs for every GPU lineup since Fermi. The main difference from one gen to the next is performance. Fortunately it's pretty clear that your definition of 'wattage creep' died with Fermi. It's no more relevant to modern performance scaling than your reference to Riva 128 as proof of continued wattage creep.

    I just find it difficult to believe that any informed person familiar with the trends and progression of the industry over the past two decades would now expect Nvidia to limit TDPs to 75W, and then feel threatened when they, in overwhelmingly predictable fashion, didn't do that. I mean, what's the basis for this expectation? When have they ever launched an ultra low-end card first, or imposed such ridiculous TDP constraints on themselves relative to the norms of the time? Why does the existence of 'high' TDP cards "threaten the concept of efficiency" when TDP has no bearing on efficiency?

    And it's strange that you mention steam survey in defense of your position, when the 730 isn't nearly as popular as cards like the 970, or 960. There's definitely a sweet spot, but the 730 (and other cards like it) fall far below that threshold.
  • BrokenCrayons - Friday, May 13, 2016 - link

    I guess I'll try once more since you don't seem to understand what I'm talking about with respect to wattage creep. I've cut out the non-relevant parts of the discussion so it'll make more sense. This pretty much encapsulates what I meant: "With regards to wattage creep, I only compared the new GPU to the current generation card which NV is using as a basis of comparison in its presentation materials." I probably should have left out the historic details since you're getting awfully hung up on the Riva 128 and don't seem to acknowledge there were a few other graphics cards that were produced between it and the Fermi generation. I'm guessing that generation is probably when you became more familiar with the technology so it might make sense that the artificial cut-off in acknowledging graphics card power requirements would begin there. So, for the sake of your own comprehension, the wattage creep thing that you're stuck on is the increase between the 980 -> 1080 and nothing more. We probably shouldn't look at anything earlier than the past couple of years since that appears to be really confusing.

    To address your second point of confusion, my desire is to see the release of lower end graphics cards, starting from the bottom and working upward to the top. I'm not surprised by the approach NV has taken, but that doesn't stop me from wishing the world were different and that the people in it were less easily taken in by business marketing. I realize that very few people are ultimately endowed with the ability to analyze the bigger picture of the world in which they live without losing sight of the nuances as well, but I'm eternally optimistic that a few people are capable of doing so and just need the right sort of nudge to get there.

    In fact, your third discussion point, the Steam Survey, is a pretty good example of missing the fact that a forest exists because of all those trees that are getting in the way of seeing it. The 730's percentages aren't notable in relationship to the 9xx cards. In fact, what's more noteworthy is the percentage of Intel graphics cards. Intel's percentage alone ought to make it obvious that the bottom rungs of the GPU performance ladder are vitally important. Combining those with the lower end of the NV and AMD product stacks paints that forest picture I was just talking about, wherein low end graphics adapters are an unquestionably dominant force. Beyond the Steam survey are the sales numbers by cost categories that have demonstrated for years that lower end computers with lower end hardware sell in large numbers. Though I'm probably pointing out too many individual trees at this point, I'll also throw in that the mobile devices out there (smartphones and tablets) demonstrate how huge the entertainment market is on power efficient hardware, given that the sheer number of casual game copies sold demonstrates the dominance of gaming on comparatively weak graphics hardware.
  • dragonsqrrl - Friday, May 13, 2016 - link

    Argument creep?

    "However, if I were to entertain that claim as accurate, I'd find the situation even more abhorrent since NV failed to take the time to use the gains in efficiency to do away with external power connectors and dual slot cooling, both of which go against the grain of the progressively more mobile and power efficient world of computing in which we're living now."

    Your own words. So what you're trying to say now is that I've misunderstood. You weren't trying to say that it's abhorrent of Nvidia to not have killed off external power connectors this generation, you simply wished Nvidia would focus more on low-end cards. I love how you're now trying to paint yourself as some sort of enlightened open minded intellectual, but unfortunately your previous 2 comments aren't going anywhere.

    "I probably should have left out the historic details since you're getting awfully hung up on the Riva 128 and don't seem to acknowledge there were a few other graphics cards that were produced between it and the Fermi generation."

    ... that's exactly what I acknowledged. In fact failing to acknowledge the rationale for the inflation of TDP between the two was exactly the part of your previous comment I tried to address. I tried to explain that the 'wattage creep' you're thinking about, in referencing the inflation since Riva 128, and that lead to a card like the 5800, is very different from the increase in TDP between the 980 and 1080. The difference now is efficiency is driving the performance scaling.

    "So, for the sake of your own comprehension, the wattage creep thing that you're stuck on is the increase between the 980 -> 1080 and nothing more."

    ... so I think it's misinformed and misleading to simply refer to that as 'wattage creep' while referencing the 5800 and Riva as examples.

    "I realize that very few people are ultimately endowed with the ability to analyze the bigger picture of the world in which they live without losing sight of the nuances as well, but I'm eternally optimistic that a few people are capable of doing so and just need the right sort of nudge to get there."

    Isn't this criticism just a little hypocritical given your hard line position? You point out the inability of others to "analyze the bigger picture", but at the same time you can't seem to fathom the value of higher TDP cards, not just for gamers, but for researchers, developers, and content creators. It's much more than just the objectively unnuanced picture you're portraying of ignorant "resolution junkies" being taken in by marketing and "silly showboat toys". Again, you say that higher TDP cards "threaten the concept of efficiency", which is ironic since the 1080 is the most efficient card ever, and TDP has nothing to do with efficiency.

    "The 730's percentages aren't notable in relationship to the 9xx cards. In fact, what's more noteworthy is the percentage of Intel graphics cards."

    How is that noteworthy in the context of your argument? You discussed the popularity of ultra low-end discrete cards to help bolster your position, did you not? How do iGPUs that come with most Intel processors by default support that?

    "Combining those with the lower end of the NV and AMD product stacks and it paints that forest picture I was just talking about wherein low end graphics adapters are an unquestionably dominant force."

    True, but I doubt ultra low-end, like you've been promoting, would make for quite as compelling an argument. Hopefully your position hasn't migrated too much here either. You did say, "I would much prefer a sub-30 watt half-height card over one of these silly showboat toys". I could also add up cards like the 970, 960, 750Ti, 760, 660, etc. and make a pretty compelling argument for the popularity of higher-end cards.
  • Kutark - Thursday, May 12, 2016 - link

    I honestly don't see that as an issue in this market. AMD buyers have already shown they really don't care about power usage when they buy cards with a 70-90W higher TDP than the equivalent NVidia offering. I've made lots of arguments that the price/perf savings of an AMD card over an NVidia card aren't borne out over the 2-3 year life cycle of a card once you factor in the electricity costs.

    However, a lot of kids and people aren't paying their own electrical bills anyways, so it's kind of 6 of one half a dozen of the other to them.

    Point being, power efficiency is generally not on the list of most people's concerns when buying a discrete graphics card. Price/perf is typically the main metric (with a little bit of fanboyism thrown in for good measure).
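    To put a rough number on it (every input here is an assumption for illustration, not anyone's actual bill):

        # Illustrative lifetime electricity cost of a GPU power-draw delta -- all inputs assumed
        delta_watts = 80      # extra draw under load vs. the competing card
        hours_per_day = 3     # assumed gaming time
        years = 3             # assumed ownership period
        usd_per_kwh = 0.12    # assumed electricity price
        extra_kwh = delta_watts / 1000 * hours_per_day * 365 * years
        print(f"~{extra_kwh:.0f} kWh, ~${extra_kwh * usd_per_kwh:.0f} over {years} years")  # ~263 kWh, ~$32

    Whether that $30-odd matters obviously depends on your usage and local electricity prices.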
  • medi03 - Thursday, May 12, 2016 - link

    Equivalent... what "equivalent"?
    AMD cards consuming an extra 70 W are also roughly 10% faster than similarly priced nVidia cards.
  • beck2050 - Monday, May 9, 2016 - link

    Excited to see your actual reviews!
  • AnnonymousCoward - Monday, May 9, 2016 - link

    Why does NVIDIA always use a leaf blower for the reference fan? Dual exposed fans above a flat heatsink are proven to be much quieter and cooler.
  • Rayb - Monday, May 9, 2016 - link

    Can you say "Vapor Chamber Cooler" and understand its meaning? Obviously, this is a reference board and cooling solution from NVIDIA, not a custom design from other OEMs.
  • TheinsanegamerN - Thursday, May 12, 2016 - link

    and those are terrible for small cases or multi-GPU, as they just cycle heat around inside the case. XFX fixed the blower fan for the 390X; we'll see if nvidia fixed the blower cooler this time around.
  • oranos - Monday, May 9, 2016 - link

    going to be the easiest $600 I've ever spent
  • cactusdog - Monday, May 9, 2016 - link

    These cards won't be "$599 and $379 respectively". This is a strange launch; there is a lot of sneaky marketing, PR spin, and confusion from Nvidia. The "Founders Edition" is actually the bog standard reference card. They are $699, not $599, and the 1070 won't be $379 but $479 on release. With availability issues, the 1080 will be more like $749-$799 on release. The cheaper $599 price is for AIB partners; they can make non-reference versions of the 1080 for $599, but most won't be sold at that price. The real prices change the perception of the cards: they're not as attractive, and with the only benchmark shown having the 1080 performing slower than a 980Ti, it seems Nvidia's hype and marketing doesn't match reality.
  • santz - Tuesday, May 10, 2016 - link

    I believe you, and I can't wait for AMD to launch their high end Polaris cards, as I also believe that only true competition will make prices reflect what average consumers can spend. All this sneaky stuff being pulled by PR may just backfire... hmmm, time will tell
  • medi03 - Thursday, May 12, 2016 - link

    Well, that might come in October at the earliest.
    From what I got, what we'll get will be 470 and 480 cards,
    with the 480 being roughly on 390X levels (not bad at all for a $200-ish card though).

    It's not even clear if there will be a 490 based on Polaris.
    The first (smaller) Vega allegedly comes in October though.
  • nevcairiel - Wednesday, May 11, 2016 - link

    Well "bog standard reference" with a vapor chamber cooler and a full metal body. Its quite a bit more premium than a reference 980 was. We'll have to see what kind of models AIBs produce, its a new strategy to position the reference in the middle of the segment instead of at the bottom.
  • KoolAidMan1 - Thursday, May 12, 2016 - link

    Yeah, it feels like Founder's Edition cards are there to get a premium on what are already going to be very very hard cards to buy at launch
  • tazuk79 - Tuesday, May 10, 2016 - link

    Anyone know what the noise level of a vapor chamber cooler would be like?
  • TheinsanegamerN - Thursday, May 12, 2016 - link

    Look at nvidia's current blowers. Unless they changed the fan, it'll be the same.
  • Kareloy - Tuesday, May 10, 2016 - link

    What I find surprising is that the 1080 is basically the same as a Titan/Titan X but will probably have the price of a Titan. Seriously, Nvidia? I feel much cosier using my dual Asus R9 Fury X; they cost less than a used Titan Z and perform like a Titan X. Don't get me wrong, I'm 13 and on a budget, but I know my stuff, and I'm telling you that you're much better off sticking with your cheap AMD GPUs and your 4K screens.

    Sadly, I can't say the same about AMD CPUs. But back to the topic: when the 980's price is reduced, forget all about Nvidia's French names (Pascal) and stick with our friendly and familiar Maxwell.

    Pls REPLY to me, I want to know your opinions, thnx!
  • Kutark - Thursday, May 12, 2016 - link

    I honestly don't know where you are getting the "will probably have the price of a Titan/TitanX". What are you basing that on? If you're talking about the fact that the price will be inflated for the first few weeks after release, that's not a fair comparison. The r9 fury was inflated for a long time after release. Once prices settle into their MSRP you will see the 1080 at $599 and probably aftermarket overclocked models in the $620-650 price range.

    Even if it were identical performance to a Titan X that's a fantastic deal. We don't know the big picture, but the NDA is being released on the 17th, and reviewers have had cards for a while now. So we'll have plenty of performance data to parse through in less than a week.
  • brianlew0827 - Thursday, May 12, 2016 - link

    980/980Ti will soon become history
    together with Fury/X
  • medi03 - Thursday, May 12, 2016 - link

    Only 30% of nVidia users are on Maxwell.
    It's about price, silly.

    Also, wait for real life benchmarks. The fact that the Doom demo was run at 1080p isn't particularly inspiring. (yet they seem to be proud of it, lol)
  • piiman - Saturday, May 14, 2016 - link

    I thought that was a little strange myself. I used it to officially start the rumor that the 1080 can only do 1080p :)
  • SeanJ76 - Thursday, May 12, 2016 - link

    Looks good!
  • willis936 - Friday, May 13, 2016 - link

    Ryan, if you're allowed to say: have you received review samples yet?
  • piiman - Saturday, May 14, 2016 - link

    I've seen reviewers show they have the card (actually holding it up for the camera), so you can bet that the day the NDA lifts we will have tons of benchmarks.
  • Ranger1065 - Friday, May 27, 2016 - link

    Nope sorry you absolutely are not allowed to say. Freedom of speech is no longer acceptable at Anandtech. Comments that are critical of the way things are done will be deleted, I know from experience. You just need to accept that after the Zenimax purchase, things have deteriorated rapidly at Anandtech. Bring back the glory days of Mr Anand Lal Shimpi!
  • lashek37 - Saturday, May 14, 2016 - link

    I'll upgrade my computer before I spend $700 for another video card, lol, but damn, this card is a beast 😩👍. Right now I'm happy with what I got at home... On the other hand, I could sell my GeForce 980 Ti for 300 bucks on Amazon with no problem, make up the remaining amount, and put that cash toward this beast. I win 👏💵👏😚😂
