Comments Locked

279 Comments

Back to Article

  • timecop1818 - Friday, February 7, 2020 - link

    Another useless processor from AMD
  • jordanclock - Friday, February 7, 2020 - link

    Care to elaborate on the hyperbolic statement? Or are you reading a review from another dimension where the 3990X doesn't dominate on most benchmarks and is competitive on the rest?
  • Irata - Friday, February 7, 2020 - link

    And now remember that (s)he was probably sitting in front of a browser, hitting F5 repeatedly just to post the first comment, before even having read the article.
  • Lhapiye_Kie - Friday, February 7, 2020 - link

    eh, are you a regular here too?
    do you mean that is someone with non K something?
    XD
  • leexgx - Monday, February 24, 2020 - link

    He really undermined this review (and wasted his time) by not updating his OS (the CPU was showing as 2 sockets); an up-to-date OS shows it as 1 socket.

    The 64-thread limit is still there on Pro, Pro for Workstations, and Enterprise even on an up-to-date 1903/1909 (Windows still splits the CPU into two processor groups, kind of like NUMA nodes, but this is only visible via Set Affinity under the Details tab in Task Manager).
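
    To make the split concrete, here is a minimal sketch (not from the review; the group index and the all-ones mask are illustrative assumptions) of how a Windows program can enumerate processor groups and opt a thread into the second one:

        #include <windows.h>
        #include <cstdio>

        int main() {
            // Windows caps a processor group at 64 logical processors, so a
            // 128-thread 3990X shows up as two groups.
            WORD groups = GetActiveProcessorGroupCount();
            printf("processor groups: %u\n", (unsigned)groups);
            for (WORD g = 0; g < groups; ++g)
                printf("  group %u: %lu logical processors\n",
                       (unsigned)g, GetActiveProcessorCount(g));

            if (groups > 1) {
                // A thread runs in only one group at a time; move this one to group 1.
                GROUP_AFFINITY ga = {};
                ga.Group = 1;
                ga.Mask = ~(KAFFINITY)0;  // assumes group 1 is fully populated
                SetThreadGroupAffinity(GetCurrentThread(), &ga, nullptr);
            }
            return 0;
        }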
  • yeeeeman - Friday, February 7, 2020 - link

    It is useless for the consumer market, but for the enthusiasts it is a gem.
  • Irata - Friday, February 7, 2020 - link

    That applies to anything high end
  • bill.rookard - Friday, February 7, 2020 - link

    Not sure about it being for 'enthusiasts', as its price is a bit high for that; it really is aimed squarely at the workstation market. That being said, the fact that it effectively only (only, heh!) supports 256GB of RAM at this point, until larger UDIMMs come out, does limit its appeal for the highest-end configurations for VMs and VFX studios. Registered ECC DIMM support is almost mandatory for those.
  • nevcairiel - Saturday, February 8, 2020 - link

    I agree with this; with this high a number of cores the memory support is, frankly, not enough. It severely limits the usefulness. But I suppose that's the point; after all, they still want to sell the much more expensive EPYC CPUs.
  • AlexDaum - Sunday, February 9, 2020 - link

    The single-socket EPYC CPUs aren't even that much more expensive. On Newegg you can get one for $4,700.
  • Logic28 - Monday, May 11, 2020 - link

    Link or it didn't happen.

    The 8180, which has only 28 cores, has a list price on Newegg right now of $11,000 vs. the $4K 3990X Threadripper....

    I don't get this need to push out information that is clearly not truthful. The price of these procs needs to fall eventually; right now Intel is living off the upgrade path many studios are dug in on, and so you have IT trying to justify a much worse CPU so they don't have to do a bunch of work replacing all the machines currently getting their assets kicked by a consumer CPU, again at a fraction of the cost.
  • sharath.naik - Saturday, February 8, 2020 - link

    Agreed, for a 64-core processor to be fully utilized you need more RAM capacity. But we do have 64GB DIMMs available already, which means you could go up to 512GB today. It is an unnecessary limitation.
  • antus - Sunday, February 9, 2020 - link

    It still has uses for scientific workloads. It's up to the user to decide if this many cores in this configuration at this low price works for them.
    It's a pity this article centered so much on Windows limitations. Sure, some people might want this many cores in a HEDT configuration, but I'd like to see Linux benchmarks, as it's a free OS that can handle this CPU properly and run scientific workloads. It would likely have a place in the racks at the university where I work.
  • GreenReaper - Sunday, February 9, 2020 - link

    Ultimately this is a Windows shop; you need to look to Phoronix or ServeTheHome (which did both). The takeaway is the same, but they do more traditional server workloads. For parallel server tasks, it's great. Most people will want to use one of the cut-down CPUs and spend the savings on RAM/storage.
  • alysdexia - Monday, May 4, 2020 - link

    It's, whom, I'd, CPU, should
  • kardonn - Tuesday, February 11, 2020 - link

    I run a very high-end VFX studio and do simulation work for big features, high-end commercials, and big productions for Amazon/Netflix. I assure you, 256GB of RAM is way more than I've ever needed and will easily be future-proof enough until larger UDIMMs become available one day to unlock the 512GB potential.

    All of my current workstations have 128GB of RAM, and it's very rare for me to work on jobs that even approach that limit. 256GB is tons for 99% of the work people will be throwing a 3990X at.
  • alysdexia - Monday, May 4, 2020 - link

    its, hick
  • Logic28 - Monday, May 11, 2020 - link

    You guys are flat-out wrong about the usefulness in VFX. I work in VFX; Blur used this chip to render Terminator: Dark Fate. And no single render is going over 128GB in most renders. You don't treat this like a standard server where you are running 4-8 frames/jobs on one machine, like you would with, say, an 8280 with 56 cores and enough RAM to give each job 128GB.
    You instead put this on a lighting artist's desk, or use it for Houdini physics sims, or you can use it as a server, but only pushing 1-2 frames through it at a time.
    But here is the kicker people need to compare this to:
    This proc is literally priced at 1/7th to 1/10th the price of the Xeon, and it destroys it in rendering speed.
    So you can increase a lighting artist's working speed by several orders of magnitude.

    And no, you cannot find a comparable Xeon for $4,700. What are you guys, fake bots pushing Intel propaganda? Seriously, I just looked on Newegg.com: you can get the 8180, which has 28 cores, for $11,000. Which is like less than half the speed of the 3990X. Which is $4K. So you need 2 Xeons at $22,000, plus a dual-socket motherboard; add another $2K extra for setup costs, etc.

    So what would you rather have: one Xeon 8280 server with 2 processors for $24K and 128GB × 6 of RAM,
    or
    6 full 3990X Threadripper servers, each with 128-256GB of RAM?

    Option 2 gives you literally 7-8 times the rendering power for the same price. I mean, seriously.
    'No use'? You have no idea about hardware if you think a machine that is destroying a server three times its price has no use.

    Yeah, it has a place: under my bloody desk, or Teradici'd from my closet.

    Again, Blur did brilliant work on Dark Fate, a heavy CG movie, no problem with a server room full of these babies.

    And that is not even talking about the fact that the upgrade path for the 3990X has much more potential, with a 3999X future, vs. the Xeon, which is basically a beast of a die that consumes twice the power for less rendering speed.

    Seriously. Even Premiere benchmarks fall to this, and to the Ryzen 3950X beast as well, vs. Intel.

    It is amazing how people just refuse to admit AMD is winning...
  • Santoval - Sunday, February 9, 2020 - link

    It depends on how you define "enthusiasts". If you mean enthusiast *creators* who need a workstation for their work, then sure, that's the CPU for them. Video editors, photographers, graphic designers, industrial designers, game designers ... these kinds of creators. It's just not for playing games or merely running benchmarks, though. Even for a professional musician it might be overkill.
  • WaltC - Friday, February 7, 2020 - link

    I found this article a bit baffling, frankly. I did not understand the "out of chaos" titling at all...;) But anyway--it should be obvious what AMD is doing here. People running gaming desktops on Win10 Home or Pro are *not* the people this CPU is aimed at--the CPU is aimed at prosumers who would rather not spend $20K for Intel's inferior solutions, but would rather spend $4K for a faster CPU solution, save a cool $16K in the bargain, and come out with something appreciably faster. Yes, people are going to run this with Enterprise--duh...;) You aren't going to spend money on a 128T CPU and then run it with a 64T OS--I don't even know why Win10 Home and Win10 Pro were mentioned at all, other than to state they shouldn't be used with this CPU, which would take but a single sentence. Then there's the handful of benchmarks used here--how many threads does each of them support at maximum? The article didn't say--so that was sort of a strikeout, etc. I think AnandTech needs to come back and do this review properly--as it stands, this one makes it seem like the only "chaos" involved is the obvious confusion in the minds of the AT reviewers....;) (No offense) Simply put: if Intel couldn't sell $20K CPU systems, Intel wouldn't make them--so obviously there's a market for 128T CPUs--again, duh. You can do much better than Intel at a fraction of the cost--and there's your market! No chaos at all. Also: this CPU is very new--there remain the usual AGESA BIOS improvements that need to be made in the upcoming months, etc. That fact should have garnered at least a sentence, don't you think? In the past I've seen much better reviews than this--especially for the world's first and only 128T single CPU!
  • WaltC - Friday, February 7, 2020 - link

    Above I meant to say that "In the past I've seen much better reviews from AT,"--you guys going to get a decent editing system for the news section anytime soon?
  • Irata - Friday, February 7, 2020 - link

    Good point. Check OEM workstations like the Dell Precision 7920, and what is the installed OS? Windows 10 for Workstations. And that's for the lowest-end 6C6T Xeon Bronze model.

    The OS version's name kinda gives it away.
  • Irata - Friday, February 7, 2020 - link

    You could use a car comparison test as an analogy: if you are comparing a two-seater to a sedan and your conclusion is that the sedan's passenger seat is more spacious, you are missing an important point - the sedan has space for three passengers, the two-seater only for one, i.e. you can do things with the sedan that you cannot with the two-seater.
  • 29a - Friday, February 7, 2020 - link

    They always halfass AMD reviews, just look at EVERY Ryzen release.
  • sandtitz - Saturday, February 8, 2020 - link

    You have some good points there.

    No software can scale up to an infinite number of threads; is 128-way already beyond some of the software tested? Some numbers saw regressions for whatever reason.

    I appreciate this article mostly for the Windows 10 Pro vs. Workstation/Enterprise benchmarking since I always thought the difference was in licensing and max CPU/Mem support.

    I'm sure there are going to be enthusiasts and business users who have a need for a 64-core CPU, wouldn't know the difference between Windows Pro and Workstation, and would just go for the cheaper one if the hardware doesn't surpass what the Pro license allows:

    I've delivered some fully loaded HP ZBook laptops to end users and they had the Win 10 Workstation license from the factory. Since neither the CPU (E-2186M) nor the memory (64GB) even approached the Pro limits, I was a bit perplexed, but didn't think too much of it. Perhaps HP engineers had benchmarked internally and found speed differences?
  • jospoortvliet - Sunday, February 9, 2020 - link

    The real question is why anyone expects a consumer os to do well with such a cpu... even the workstation version of Windows is a joke when compared to the Linux performance: https://www.phoronix.com/scan.php?page=article&...

    A 30-60% difference is no joke, and shows how big the gap between win and Lin still is. This cpu is simply too “pro” for Windows...
  • sandtitz - Sunday, February 9, 2020 - link

    Well, that's where Win10 Enterprise/Pro for Workstations comes into play.

    Had you read this Anandtech article you'd see how much faster it is than the plain Win10Pro.

    Mr. Larabel didn't use the Enterprise version for testing. This is quite understandable since Microsoft doesn't make it clear that there is a tremendous performance boost.
  • tuxRoller - Saturday, February 15, 2020 - link

    https://www.phoronix.com/scan.php?page=article&...

    While this is using Clear Linux as reference, its advantage over Windows Enterprise ranges from 7-29% (geometric mean) with 16 - 64(+SMT) cores, respectively.
  • valinor89 - Monday, February 10, 2020 - link

    The baffling titles and subtitles are references to Sun Tzu's "The Art of War", I believe.
  • alysdexia - Monday, May 4, 2020 - link

    i9-9900T is more efficient and thriftier than Threadrippers.
  • boozed - Saturday, February 8, 2020 - link

    I think the more appropriate description for that is "provocative", and not in the intellectual way.

    Goading, perhaps.
  • BenSkywalker - Saturday, February 8, 2020 - link

    Not the one who made the original comment but.... RAM?

    512 GB max..?

    Just checked Dell for lower-tier workstations than what this is purported to be competing against, and they offer 6TB of RAM. 2GB per thread is meh even when looking at netbooks.

    There must be some usage scenario for this processor, with its extremely limited memory capacity and very high thread count; I just can't think of what it would be that isn't already done on GPUs/tensor hardware/vector co-processors.
  • Cooe - Sunday, February 9, 2020 - link

    No single socket workstation can handle 6TB of memory.... Xeon-W is limited to like 1.5TB.
  • BenSkywalker - Tuesday, February 11, 2020 - link

    3TB single socket- https://www.newegg.com/p/N82E16813183686

    If you could show me a 3TB counterpart for this chip, despite being considerably less RAM per core, it would at least make sense. 512MB for a 128 thread CPU...
  • BenSkywalker - Tuesday, February 11, 2020 - link

    512GB even....
  • twtech - Saturday, February 15, 2020 - link

    Rendering (some renderers are still CPU-only) and code compiling are two that come to mind.
  • alufan - Friday, February 7, 2020 - link

    bet you have trouble sitting right now and for the next year or so at least
  • DigitalFreak - Friday, February 7, 2020 - link

    Another useless post from timecop1818

    See, it's so easy anyone can do it!
  • Cellar Door - Friday, February 7, 2020 - link

    Can Anandtech finally implement a rank-based comment system with admins, please?

    So we don't get obvious Intel propaganda like this; literally any idiot can write whatever they want as the top comment here.
  • eek2121 - Friday, February 7, 2020 - link

    I get the feeling that AnandTech is operating on a shoestring budget. For a very modest fee I know I personally could provide them with a much better platform.
  • OddFriendship8989 - Wednesday, February 12, 2020 - link

    I propose that Anand contribute his next set of RSUs that will vest
  • alpha754293 - Friday, February 7, 2020 - link

    A useless statement from the sensei of uselessness.
  • TEAMSWITCHER - Friday, February 7, 2020 - link

    I think it's a useful processor...
    It's just very, very, very, very expensive... One "very" for each $1000.
  • FreckledTrout - Friday, February 7, 2020 - link

    Expensive? This CPU is dirt cheap for a 64-core. You go Intel and have to pay $20K for anything close, and that only gets you 56 cores, and this CPU beats that dual-CPU config. So for those that need the cores, this is freaking cheap.
  • unclevagz - Friday, February 7, 2020 - link

    Except that the number of workstation-oriented programs that can meaningfully use more than 64 threads, as evidenced by the testing above, is vanishingly small. For most professionals it'd still make more sense to just buy a 3970X.
  • jospoortvliet - Sunday, February 9, 2020 - link

    It also depends on the OS - Windows isn't really suitable for this level of performance. https://www.phoronix.com/scan.php?page=article&...
  • kgardas - Monday, February 10, 2020 - link

    That test is using the 2990WX with its 4 NUMA nodes (IIRC), only 2 of which have direct access to RAM. That is a completely different beast from the current TR3 generation, which is a single NUMA node where all cores have similar access to RAM. Windows should work well on this one; you will just need to use applications supporting that number of cores/threads.
  • FreckledTrout - Sunday, February 9, 2020 - link

    I think we can agree anyone buying this CPU knows exactly what they will be doing with it.
  • beaker7 - Friday, February 7, 2020 - link

    just ordered
  • prisonerX - Friday, February 7, 2020 - link

    Another useless holder of useless Intel stock. Keep shilling.
  • james4591 - Friday, February 7, 2020 - link

    It's only useless to people who don't know how to use it.
  • SanX - Friday, February 7, 2020 - link

    Useless is probably Ian's own 3D particle movement test, which demonstrates just one single feature of the AVX instruction set. Were there any real-life or synthetic tests which use AVX-512 or 256-bit AVX to get a clue what improvement it really gives? Watching a pure single-instruction speed improvement show a 12x boost is pathetic and tells us nothing. Compile with and without AVX and show us if it gets any meaningful speedup, at least for something useful like a Gaussian elimination Ax=b solver.
  • nsmeds - Saturday, February 8, 2020 - link

    For AVX influence on Gaussian elimination, take a look at HPL. It does have a huge impact, as the matrix update in every reduction step fits extremely well to AVX-512. See http://www.top500.org If you want to experiment, there are implementations available from Intel (MKL) and AMD (BLIS, libFLAME, and HPL-FLAME on GitHub); see e.g.
    https://www.google.com/url?sa=t&source=web&...

    Several scientific workloads fit AVX-512 usage well, but definitely not all. Adapting a code to effectively use AVX-512 can be labour-intensive, though, and for research purposes may make the code harder for researchers to adapt. It may be more important that researchers can implement new ideas easily than that the code runs at optimal efficiency. And only for large problem sizes may the effort of laying data out nicely in memory be amortized by the speedup from AVX-512.
  • realbabilu - Saturday, February 8, 2020 - link

    Just checked OpenBLAS, Intel MKL, and BLIS with LAPACK DGETRI matrix inversion, and the Crout inversion from the Fortran Polyhedron benchmark. The -march=core-avx512 and -march=core-avx2 flags do make the calculation faster: from 30s to 17s on a 9750H.
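
    To make concrete why the update step mentioned above maps so well to AVX-512, here is a minimal Gaussian elimination sketch using AVX-512 intrinsics (an illustration only, not HPL's actual code; build with -mavx512f):

        #include <immintrin.h>

        // One elimination step on an n x n row-major matrix A:
        // for rows i > k, A[i][j] -= (A[i][k] / A[k][k]) * A[k][j].
        void eliminate_step(double *A, int n, int k) {
            for (int i = k + 1; i < n; ++i) {
                double factor = A[i * n + k] / A[k * n + k];
                __m512d vf = _mm512_set1_pd(factor);
                int j = k + 1;
                for (; j + 8 <= n; j += 8) {
                    __m512d pivot_row = _mm512_loadu_pd(&A[k * n + j]);
                    __m512d cur_row   = _mm512_loadu_pd(&A[i * n + j]);
                    // cur_row = cur_row - vf * pivot_row: 8 doubles per fused instruction
                    cur_row = _mm512_fnmadd_pd(vf, pivot_row, cur_row);
                    _mm512_storeu_pd(&A[i * n + j], cur_row);
                }
                for (; j < n; ++j)  // scalar tail for the remainder
                    A[i * n + j] -= factor * A[k * n + j];
                A[i * n + k] = 0.0;
            }
        }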
  • nt300 - Friday, February 7, 2020 - link

    Another phenomenal processor from AMD.
    Catering to a market that wants such processors.
    Anybody claiming these are useless doesn't understand the computer industry.
  • AshlayW - Saturday, February 8, 2020 - link

    Another useless comment from a tool.
  • Zak90 - Saturday, February 8, 2020 - link

    @timecop1818 "Another useless processor from AMD"

    --> Idiot!!!
  • levizx - Saturday, February 8, 2020 - link

    Another waste of food, water, and air in a skin suit spotted.
  • evernessince - Sunday, February 9, 2020 - link

    AnandTech would be better off disabling article comments if they aren't going to bother moderating blatant trolls.
  • nt300 - Sunday, February 9, 2020 - link

    Another Superior Processor from AMD.
    Bar none, AMD annihilates Intel in everything. AMD's price/performance is KING.
  • Xyler94 - Monday, February 10, 2020 - link

    Intel still wins in AVX-512 and VNNI AI loads. AMD's got the brute-force crown, and brute-forcing your way through everything is still great, but if your workload is 90% AVX-512 or VNNI AI, Intel would be better suited.

    But that's the great thing about competition, we now have choices, and it's no longer "Just get Intel".
  • Spunjji - Wednesday, February 12, 2020 - link

    A GPU or NPU would be even more suited to those workloads, though. AVX-512 is a weird middle-ground.
  • yetanotherhuman - Monday, February 10, 2020 - link

    I don't follow. It's not a gaming processor. It is, however, the fastest workstation/HEDT chip that exists. That's clearly not useless.
  • Spunjji - Monday, February 10, 2020 - link

    Another useless comment from timcarp.

    A moderator, a moderator, my kingdom for a moderator...
  • babadivad - Tuesday, February 11, 2020 - link

    How can you read this and not come away with the knowledge that this is the most powerful CPU on the planet?
  • kardonn - Tuesday, February 11, 2020 - link

    I pre-ordered it; easiest hardware purchase decision of my life. My current fleet of workstations is dual 18C Xeons; they've done very well for me over the last 5-6 years, but single-core performance is a big deal in workstations... the 3990X really outshines the Xeons on those tasks, and then when it comes to multithreaded tasks it's just an absolute bloodbath.

    I have never owned AMD hardware in my life before this because Intel was always the best decision to buy if you need raw CPU power the way I do. Now AMD is king though, and I'm no fanboi or brand loyalist...I buy whatever is best.

    There's a reason AMD has been picking up a lot of server market share and workstation market share. They're making the CPUs that everyone wants right now.
  • spicemuthaf - Sunday, June 14, 2020 - link

    My friend, everything is useless if you don't know what to do with it.
  • edsib1 - Friday, February 7, 2020 - link

    If you're going to run server benchmarks - especially with >32 cores, then use Linux/Unix. What self respecting mission critical business runs on windows server?
  • bloinkXP - Friday, February 7, 2020 - link

    That's not really an accurate statement. I work for one of the largest insurance companies in the world and we are 99% Windows. The days of Linux having more stability than Windows are long since over. As a matter of fact due to various compliance reporting we have to reboot all servers monthly (for patching) so even the famous "uptime" metric is largely useless. Our Windows platforms are very stable and handle the applications that our business requires (SQL Server/SP/.NET...etc)
  • vanilla_gorilla - Friday, February 7, 2020 - link

    > The days of Linux having more stability than Windows are long since over.

    lol
  • FunBunny2 - Friday, February 7, 2020 - link

    "That's not really an accurate statement."

    well, does the IBM z mainframe run Windoze applications? industrial strength RDBMS are much happier on *nix. even Sql Server, of course.
  • Whiteknight2020 - Friday, February 7, 2020 - link

    No, it generally runs z/OS. SQL Server IS an industrial-strength database; it just doesn't need a ton of admins to keep it running, because it just works.
  • 29a - Friday, February 7, 2020 - link

    Your example doesn't use Linux as its OS.
  • FunBunny2 - Friday, February 7, 2020 - link

    sure it does. just go look.
  • FunBunny2 - Friday, February 7, 2020 - link

    "IBM z/VM supports Linux, z/OS®, z/VSE® and z/TPF operating systems on IBM Z® and LinuxONE™ servers. It can host thousands of virtual servers on a single system."

    from IBM, of course.

    "z/VSE (Virtual Storage Extended) is an operating system for IBM mainframe computers, the latest one in the DOS/360 lineage, which originated in 1965."
    [the wiki]

    now, that's long term support.
  • jospoortvliet - Saturday, February 8, 2020 - link

    Monthly reboots? So you keep running with known security issues for an average of 15 days? Modern Linux (the supported kind, RHEL or SLES) has live kernel patching these days, so you are secure without reboots. It continues to baffle me why companies are willing to suffer the added complexity, performance, and security hit for the privilege of running what is still a desktop OS on servers. Is it really just because management runs it on their laptops and thinks the familiarity is worth it?
  • jospoortvliet - Sunday, February 9, 2020 - link

    On that performance hit, if anyone was wondering still - 30-60%: https://www.phoronix.com/scan.php?page=article&...
  • Korguz - Sunday, February 9, 2020 - link

    Looks like it ran the test vs. Win10 Pro, not Workstation, as someone else already replied to you.... did you read the article here??
  • tuxRoller - Saturday, February 15, 2020 - link

    https://www.phoronix.com/scan.php?page=article&...
  • Korguz - Saturday, February 15, 2020 - link

    I guess you didn't read it either........
  • tuxRoller - Saturday, February 15, 2020 - link

    Funny.

    "WINDOWS 10 ENTERPRISE for this renderer was also performing much better than WINDOWS 10 PROFESSIONAL up until hitting 128 threads."

    My question is why didn't you at least click on the link?
  • Korguz - Saturday, February 15, 2020 - link

    Then something has changed, as farther up another poster posted the SAME link as you did, and at the time it looked like Mr. Larabel didn't use anything other than Win 10 Pro, as Sandtitz posted in reply to another who posted the SAME link:
    Well, that's where Win10 Enterprise/Pro for Workstations comes into play.

    Had you read this Anandtech article you'd see how much faster it is than the plain Win10 Pro.

    Mr. Larabel didn't use the Enterprise version for testing. This is quite understandable since Microsoft doesn't make it clear that there is a tremendous performance boost.

    As I had read the link when jospoortvliet posted it, it didn't state that the review used anything other than Win 10 Pro.

    so maybe the original review was updated since then.
  • tuxRoller - Monday, February 17, 2020 - link

    I don't believe anything changed. The earlier poster linked to an older article (https://www.anandtech.com/comments/15483/amd-threa... linked to https://www.phoronix.com/scan.php?page=article&... )
  • tuxRoller - Monday, February 17, 2020 - link

    Eek, sorry, I prematurely posted:)

    The link texts are pretty much identical save "3990x" vs "2990wx".

    The last thing I wanted to mention is that Enterprise didn't perform significantly better than Pro (this might be due to it having been patched).
  • tuxRoller - Monday, February 17, 2020 - link

    https://www.phoronix.com/scan.php?page=article&...

    This article was just posted and compares W10 Pro & Enterprise vs. a number of different distros. Again, W10P ≈ W10E.
  • Thanny - Saturday, February 8, 2020 - link

    You don't need to reboot a Linux server to patch it.

    It's a consequence of how the file systems work. In Linux, the name and location of a file are distinct from its contents. You can unlink an open file from the directory, create a new one with the same name, and the open file will continue functioning until it's closed. You can update everything in Linux outside the kernel without rebooting (and even the kernel, with a bit of prep work).

    So you're using a weakness of Windows as an excuse for the inferior stability of Windows mattering less.
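
    A minimal sketch of that unlink-while-open behavior on a POSIX system (the filename is made up for the demo):

        #include <cstdio>
        #include <fcntl.h>
        #include <unistd.h>

        int main() {
            int fd = open("demo.txt", O_CREAT | O_RDWR, 0644);
            write(fd, "v1", 2);

            unlink("demo.txt");   // the name is gone; the inode lives while fd is open
            int fd2 = open("demo.txt", O_CREAT | O_RDWR, 0644);  // new file, same name
            write(fd2, "v2", 2);

            char buf[3] = {0};
            pread(fd, buf, 2, 0);                   // old handle still reads old contents
            printf("old handle sees: %s\n", buf);   // prints "v1"

            close(fd);            // the old inode is reclaimed only now
            close(fd2);
            return 0;
        }

    This is exactly how a package manager can replace an in-use library without a reboot: running processes keep the old inode until they exit.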
  • PeachNCream - Monday, February 10, 2020 - link

    There is some argument for an occasional restart in the case of long-lived processes that retain older, unpatched binaries in memory due to in-flight workloads. A periodic restart will address that, but in general it is absolutely true that Linux does not really require reboots in order for patches to take effect.
  • clsmithj - Thursday, February 13, 2020 - link

    Linux IS more stable, and it runs my Ryzen Threadripper 2990WX much better than the Windows 10 Pro for Workstations install I have it dual-booting with under Fedora 31.
  • baka_toroi - Friday, February 7, 2020 - link

    If you got out of your bubble you'd realize most of them do. What a useless comment.
  • rrinker - Friday, February 7, 2020 - link

    Oh I dunno, our not very large consulting firm has thousands of clients who all run Windows infrastructure.... Though the #1 use of servers with > 32 cores is running VMWare hypervisor with Windows servers as guests on top of that.
  • velanapontinha - Friday, February 7, 2020 - link

    far too many to mention in a comment
  • andrewaggb - Friday, February 7, 2020 - link

    We use both. Windows Server is fine. Very long support, much longer than Ubuntu LTS, and it's stable.
  • reuthermonkey1 - Friday, February 7, 2020 - link

    I'm a Linux dude through and through, but most companies I've worked for use a lot of Windows for their backend systems. I think it's a bad idea, since dealing with Windows Server adds quite a bit to overall costs and complexity, but the financial folks demand it so they pay for it.
  • FunBunny2 - Friday, February 7, 2020 - link

    "the financial folks demand it so they pay for it."

    because they've been slaves to Office for decades. no other reason.
  • Ratman6161 - Friday, February 7, 2020 - link

    There is cost and then there is cost. What does that mean? Well, I'm in a highly regulated environment where if an auditor saw a system that wasn't under manufacturer's support, that's an automatic fail. So let's all please get the word "free" out of our vocabulary for this discussion. Linux is definitely not free. The companies that supply Linux distros just bill you for it using a very different licensing model than Microsoft does. To find the real costs, you have to figure the total cost of a system, including all hardware and software and all costs associated with each. When you do that, a couple of things become obvious. 1) The cost of hardware is relatively trivial when compared with software licensing. 2) The cost of the operating system, regardless of what that OS is, is also relatively trivial, though generally speaking we find that fully supported Linux and Windows end up costing very close to the same. 3) The big costs are for the software that runs on top of the hardware and OS. So saving costs by using Linux is essentially a fantasy.

    RRINKER also had a great point... the vast majority of Windows servers these days are virtual. We tend to have large numbers of small Windows servers dedicated to a particular task. We don't really find this adding complexity.
  • Whiteknight2020 - Friday, February 7, 2020 - link

    Windows Server is way cheaper to administer; configuration with Group Policy, DSC, etc. is way easier than messing with Ansible, Puppet, etc. TCO is lower as Windows admins are cheaper, SA licensing is on par with or cheaper than RHEL/Oracle UK, and Server Core is rock solid. I use & deploy both Linux & Windows, whichever the application runs best or most stably on.
  • dysonlu - Friday, February 7, 2020 - link

    It's cheaper the same way it is cheaper to outsource. The cost is hidden and usually comes later.
  • PeachNCream - Monday, February 10, 2020 - link

    This is the best approach. The underlying OS should be whatever is best suited to the task it is expected to perform, within the limits of the costs running it incurs. Of course, figuring out the best balance between costs and performance can be tricky, and a lot of companies do not dedicate the resources to examine options, simply defaulting to something familiar while assuming it is the best choice.
  • dysonlu - Friday, February 7, 2020 - link

    Enterprises use Windows because they are pretty tech-illiterate and need Microsoft support. Nobody will get fired for selecting Microsoft and Windows, even if it costs more and your whole IT is at the mercy of Microsoft.

    But, kickbacks help a lot in the decision making.
  • zmatt - Friday, February 7, 2020 - link

    Many. I would argue most, actually. There are certainly some areas where Linux really shines, but one place where it isn't just behind but completely non-existent is competing with Active Directory. Most offices still use AD domains, and for good reasons, and Linux doesn't have an answer to it.

    We have a few VM clusters that run redundant DCs. It's the only option because Active Directory is unique. It isn't perfect, but nobody offers a competing solution. Someone could develop an open-source competitor, but nobody has.
  • nightmared - Friday, February 7, 2020 - link

    While I have to admit Microsoft AD is fairly well integrated (with regard to features such as folder redirection and GPOs) and coherent, there are alternatives (after all, the core of AD resides in a "simple" LDAP server). The most compliant (because it is a re-implementation of AD) is Samba 4, and it works quite well. You can fairly easily manage a Windows AD with it, free of charge (and it's open source, of course). Still not as pervasive as Microsoft AD with all its dedicated PowerShell commands and its GUI managers.
  • Whiteknight2020 - Friday, February 7, 2020 - link

    But no group policy, integrated CA, recycle bin, DSC, third party ecosystem, gmsa etc. Not industrial strength, no support, no federation services....
  • jospoortvliet - Saturday, February 8, 2020 - link

    Check out Univention Corporate Server, they build quite the drop-in AD alternative.
  • tuxRoller - Friday, February 7, 2020 - link

    Freeipa
  • Whiteknight2020 - Friday, February 7, 2020 - link

    Is junk. Fundamentally badly designed, appalling to administer and weak on features. Nice try.
  • tuxRoller - Wednesday, February 12, 2020 - link

    Badly designed? Do you mean because it's mostly an orchestration tool?
  • Whiteknight2020 - Friday, February 7, 2020 - link

    Red Hat has tried, but its solution is pants. You can make Linux machines full citizens of AD with QAS though, so you only need Windows for the directory. It does a nice job as a certificate authority too.
  • Chaitanya - Friday, February 7, 2020 - link

    Many of my clients are running Windows servers, even in datacentres.
  • 29a - Friday, February 7, 2020 - link

    Lots.
  • Hulk - Friday, February 7, 2020 - link

    I like what AMD is doing: 8, 16, 24, 32, and 64 cores based on the same architecture. If you have the need for the compute, and the cash, they have you covered. Not to mention the fact that they've totally blown the lid off Intel's stratospheric pricing. If not for AMD, I firmly believe 8-core parts would still cost $1,000 or more.

    Unless Intel can pull a rabbit out of their hat, my next build is going to be my first AMD build.... and I've been building since the early 1990s.
  • ZoZo - Friday, February 7, 2020 - link

    If not for Intel, they would probably also cost at least $1000. It takes 2 for competition.
  • eva02langley - Friday, February 7, 2020 - link

    Once again software developers are late to the game. MS really needs to up their game in the OS division, because one day they will lose that monopoly for good. If it were not for the gaming industry, Windows would probably not be where it is today.
  • extide - Friday, February 7, 2020 - link

    I mean you can clearly see that Windows supports it just fine -- you just have to go for the Workstation/Enterprise version. It's not like Windows itself is totally behind the times.
  • Kevin G - Friday, February 7, 2020 - link

    The hard work is indeed done, just not configured for the more mundane versions of Windows, where this certainly fits into the established licensing models: this is a single-socket system and NUMA is not necessary here. A simple patch would fix things.

    Then again, as this article points out, MS didn't fix the fact that a Xeon Phi 72xx with up to 288 threads would appear as a five-socket system. I would imagine that such a workstation too would have benefited from applications recognizing that it could have a single NUMA node (this was configurable in hardware).
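
    For reference, a tiny sketch (illustrative, not from the article) of how an application would check what Windows actually reports for NUMA nodes vs. processor groups:

        #include <windows.h>
        #include <cstdio>

        int main() {
            ULONG highest_node = 0;
            GetNumaHighestNodeNumber(&highest_node);   // NUMA nodes the OS reports
            printf("NUMA nodes reported: %lu\n", highest_node + 1);
            printf("processor groups:    %u\n",
                   (unsigned)GetActiveProcessorGroupCount());
            // On a single-NUMA-node 3990X one would hope for 1 node / 1 group;
            // builds with the 64-thread group cap split 128 threads into two
            // groups regardless.
            return 0;
        }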
  • drothgery - Friday, February 7, 2020 - link

    And some quick googling shows Win 10 Pro for Workstations is less than 10% of the cost of this CPU alone, so it's not like it'd be a big deal to anyone who actually bought one.
  • Thanny - Saturday, February 8, 2020 - link

    The Windows kernel is still badly broken when it comes to complicated NUMA scheduling. That's why the 2970WX, 2990WX, and all first-gen EPYC chips (with four dies) perform relatively badly under Windows, but quite well under Linux.

    The 64-thread limitation is quite mild compared to that problem.
  • FunBunny2 - Friday, February 7, 2020 - link

    "If it was not for the gaming industry, Windows would probably not be where it is today."

    not in corporate, it's Office.
  • Makaveli - Friday, February 7, 2020 - link

    Was just going to post this. I know everyone is all over gaming and RGB; however, that means nothing in the enterprise market.

    Microsoft gets more revenue from Office alone than probably the whole Xbox division plus anything they get on the PC gaming side.
  • duvjones - Friday, February 7, 2020 - link

    To be fair, a chip like this is not something that Microsoft could predict coming in the x64 space. Which is what gives Linux (and really any POSIX system) its advantage: this kind of power and core count used to be reserved for the academic corners of high-end computing about 15-20 years ago.... where Windows simply doesn't apply.
    They manage now, but... Microsoft is only making do with a workaround. They will have to address it at some point; the question is when.
  • Whiteknight2020 - Friday, February 7, 2020 - link

    Yeah, because Windows server only supports 64 sockets and unlimited cores.....
  • GreenReaper - Saturday, February 8, 2020 - link

    64 sockets, 64 cores, 64 threads per CPU - x64 was never intended to surmount these limits. Heck, affinity groups were only introduced in Windows XP and Server 2003.

    Unfortunately they hardcoded the 64-CPU limit by using a fixed-width affinity bitmask (DWORD_PTR/KAFFINITY) and had to add Processor Groups as a hack in Win7/2008 R2 for the sake of a stable kernel API.

    Linux's sched_setaffinity() had the foresight to use a length parameter and a pointer: https://www.linuxjournal.com/article/6799

    I compile my kernels to support a specific number of CPUs, as there are costs to supporting more, albeit relatively small ones (it assumes that you might hot-add them).
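
    For comparison, a minimal sketch of the Linux call mentioned above (glibc's GNU API; the CPU number is arbitrary), where the mask length is an explicit parameter, so nothing special happens past 64 CPUs:

        #include <sched.h>   // sched_setaffinity and CPU_* macros (glibc, _GNU_SOURCE)
        #include <cstdio>

        int main() {
            const int max_cpus = 256;                 // room for 256 CPUs, not one fixed word
            cpu_set_t *mask = CPU_ALLOC(max_cpus);
            size_t size = CPU_ALLOC_SIZE(max_cpus);   // the length parameter in question
            CPU_ZERO_S(size, mask);
            CPU_SET_S(130, size, mask);               // CPU 130: beyond any 64-bit bitmask
            if (sched_setaffinity(0, size, mask) != 0)
                perror("sched_setaffinity");          // fails if this machine has fewer CPUs
            CPU_FREE(mask);
            return 0;
        }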
  • Gonemad - Friday, February 7, 2020 - link

    Seeing a $4K processor clubbing a $20K processor to death and taking its lunch (in more than one metric) is priceless.

    If you know what you need, you can save 15 to 16 grand building an AMD machine, and that's incredible.

    It shows how greedy and lazy Intel has become.

    It may not be the best chip for, say, a gaming machine, but it can beat a 20-grand Intel setup, and that ensures a spot for this chip; it is not useless.
  • Khenglish - Friday, February 7, 2020 - link

    I doubt anyone would really want to do this in practice, but in Windows 10, if you disable the GPU driver, games and benchmarks will be fully software-rendered on the CPU. I'm curious how this 64-core beast performs as a GPU!
  • Hulk - Friday, February 7, 2020 - link

    Not very well. Modern GPUs have thousands of specialized processors.
  • Kevin G - Friday, February 7, 2020 - link

    The shaders themselves are remarkably programmable. The only thing really missing from them, versus more traditional CPUs, in terms of capability is how they handle interrupts for IO. Otherwise they'd be functionally complete. Granted, the per-thread performance would be abysmal compared to modern CPUs, which are fully pipelined, OoO monsters. One other difference is that since GPU tasks are embarrassingly parallel by nature, these shaders have hardware thread management to quickly switch between them and partition resources, achieving fairly high utilization rates.

    The real specialization is in the fixed-function units, the TMUs and ROPs.
  • willis936 - Friday, February 7, 2020 - link

    Will they really? I don’t think graphics APIs fall back on software rendering for most essential features.
  • hansmuff - Friday, February 7, 2020 - link

    That is incorrect. Software rendering is never done by Windows just because you don't have rendering hardware. Games no longer come with software renderers like they used to many, many moons ago.
  • Khenglish - Friday, February 7, 2020 - link

    I love how everyone had to jump in and say I was wrong without spending 30 seconds to disable their GPU driver, try it themselves, and find out they are wrong.

    There are a lot of issues with the Win10 software renderer (full-screen mode is mostly broken, and only DX11 seems supported), but it does work. My Ivy Bridge gets fully loaded at 70W+ just to pull off 7 fps at 640x480 in Unigine Heaven, but this is something you can do.
  • extide - Friday, February 7, 2020 - link

    No -- the Windows UI will drop back to software mode but games have not included software renderers for ~two decades.
  • FunBunny2 - Friday, February 7, 2020 - link

    " games have not included software renderers for ~two decades."

    which is a deja vu experience: in the beginning DOS was a nice, benign control program. then Lotus discovered that the only way to run 1-2-3 faster than molasses uphill in winter was to fiddle the hardware directly, which DOS was happy to let it do. it didn't take long for the evil folks to discover that they could too, and the virus was born. one has to wonder how much exposure this latest GPU hardware presents?
  • PeachNCream - Monday, February 10, 2020 - link

    Computer viruses predate Lotus 1-2-3.
  • FunBunny2 - Tuesday, February 11, 2020 - link

    the point is: 1-2-3 brought the effort mainstream, by showing how DOS was just a sieve to the hardware. recall that the PC with DOS was only one of three OSes available, and PC sales didn't matter much until Corporate figured out that they just had to have 1-2-3. Mitch made Bill rich, not Bill. until 1-2-3, M$ was a legit systems software maker. after that, not so much. Xenix was their OS of the future.
  • FunBunny2 - Tuesday, February 11, 2020 - link

    ... and, for the PC, not according to this history: https://content.sentrian.com.au/blog/a-short-histo...
    "The first computer virus for MS-DOS was “Brain” and was released in 1986. It would overwrite the boot sector on the floppy disk and prevent the computer from booting. It was written by two brothers from Pakistan and was originally designed as a copy protection."

    learned how to do that from 1-2-3
  • Khenglish - Friday, February 7, 2020 - link

    Here's Unigine Heaven software rendered:

    https://i.imgur.com/0dfV4pd.png
    https://i.imgur.com/CEWhX31.png

    Fun fact: turning on tessellation drops fps by a factor of about 20.
  • Spunjji - Monday, February 10, 2020 - link

    Holy cow, I had no idea.

    I'd be interested (as a purely theoretical exercise) to see where the ideal performance balance of cores / clock speed / memory bandwidth falls when it comes to software rendering.
  • GreenReaper - Saturday, February 8, 2020 - link

    They use DirectX on Windows, and then Microsoft provides the fallback renderer.
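
    Specifically, the fallback Microsoft provides for Direct3D 11 is WARP, its software rasterizer; a minimal sketch (illustrative) of how an application would request it explicitly:

        #include <d3d11.h>
        #pragma comment(lib, "d3d11.lib")  // MSVC link directive

        int main() {
            ID3D11Device *device = nullptr;
            ID3D11DeviceContext *context = nullptr;
            D3D_FEATURE_LEVEL level;
            // D3D_DRIVER_TYPE_WARP selects the CPU rasterizer instead of the GPU.
            HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_WARP, nullptr, 0,
                                           nullptr, 0, D3D11_SDK_VERSION,
                                           &device, &level, &context);
            if (SUCCEEDED(hr)) {
                // All draw calls on this device now run on CPU cores/threads.
                context->Release();
                device->Release();
            }
            return 0;
        }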
  • Mikewind Dale - Friday, February 7, 2020 - link

    That might actually be an interesting test for someone who wants to run legacy games that don't support newer versions of Windows, DirectX, and/or don't have graphics driver support.

    For example, I was trying to play the original Diablo before the GoG version came out. It didn't work on my Radeon RX 580, so I had to set up a VMware Workstation virtual machine with 3D acceleration support. However, even though VMware Workstation supports 3D acceleration, it's still using my CPU, not my GPU. It's just that the virtual OS has software DirectX acceleration.

    Anyway, I benchmarked 3DMark2001 SE running in a Windows XP virtual machine on my 8-core Ryzen 7 2700X. I actually got scores that were competitive with GPUs from the early 2000s. So my software 3D acceleration on a Ryzen 7 2700X was approximately the same speed as a GPU from circa 2001.

    It would be interesting to see how well a 64 core processor does.
  • Khenglish - Friday, February 7, 2020 - link

    I get 5947 with a 3920xm (full 4c/8t ivb with 8MB cache) at 4.3 GHz. I would expect your 2700x to be a bit more than double that.

    https://i.imgur.com/aeQcFuu.png
  • Mikewind Dale - Saturday, February 8, 2020 - link

    I'm getting about 6800. So perhaps the VMWare Workstation software display device cannot fully take advantage of parallelization?
  • Spunjji - Monday, February 10, 2020 - link

    That or it's not as efficient as Microsoft's software layer at translating DirectX code into something that can run on the CPU. If you had the time, you could try running 3DMark 2001 natively on your system the way Khenglish is and see if there's a difference.
  • lipscomb88 - Sunday, February 9, 2020 - link

    LTT showed Crysis running on a software renderer on a 3970X and a 3990X. There's definitely a difference between those two chips, but it still chugs at times. Really cool to see.

    At some point, a high-core-count CPU mimics the parallelization in GPUs well enough to render passably.
  • Spunjji - Wednesday, February 12, 2020 - link

    It's notable in that video that the vast majority of the cores flicker around 2-5% utilisation; it looks like there's still a significant bottleneck besides the sheer number of cores for processing.
  • ZoZo - Friday, February 7, 2020 - link

    Better grab this one before it is replaced by the 4990X at $4990.
  • Irata - Friday, February 7, 2020 - link

    Ian and Gavin: Thanks for the review and particularly the Windows version analysis.

    While I agree with your conclusions, I have a suggestion for future high core count CPU reviews:

    How about trying to run several things at once, i.e. a game while the CPU is rendering, or rendering while compiling....

    Perhaps there are actual use cases that could apply where you run several demanding tasks at once that could not be done so far since the CPU power was not there.
  • Hulk - Friday, February 7, 2020 - link

    I second this suggestion. One thing that annoys me with my 4770K is that if I'm rendering a video using HandBrake and trying to work on an audio project in PreSonus Studio One, there isn't enough compute left for Studio One, so it's all distortion. But realistically 12 cores would probably do this for me ;)
  • Irata - Friday, February 7, 2020 - link

    I remember seeing one review of TR3 (the 32c version) that did a multitasking stress test, which was very interesting.

    AFAIR it was on AdoredTV, but another reviewer did it.
  • DannyH246 - Friday, February 7, 2020 - link

    Compiling was asked for in previous workstation-class CPU reviews, and many people asked for it for AMD's 16-core Ryzen release.... instead we get a gaming benchmark where they show Intel's 8-core CPU winning. What do you expect from IntelTech.com.
  • Thanny - Saturday, February 8, 2020 - link

    That used to be routine in the early days of multi-core CPU reviews.

    Seems these days everyone has forgotten about the concept of multitasking.
  • alpha754293 - Friday, February 7, 2020 - link

    I'm currently in discussions/in the works of getting a system put together in order to replace my four-node micro-cluster with either one or two of these AMD 3rd gen Threadripper systems.

    The price-per-performance is too compelling of a story for me NOT to dump my entire micro-cluster now and switch over to this.
  • eastcoast_pete - Friday, February 7, 2020 - link

    Thanks Ian and Gavin! While the business cases for this 64-core TR CPU are limited, video editing and software-based encoding are two of them. A lot of people don't realize that a lot of video is already shot in 8K 60p, and those RAW files are enormous and tax any CPU, even this beast. Also, some of these editing suites already have patches available, and apparently one or two of them are from AMD. So, not the CPU for gaming, but it has a place for certain tasks.
  • extide - Friday, February 7, 2020 - link

    "All the Threadripper 3000 family CPUs support a total of 64 PCIe 4.0 lanes from the CPU, and another 24 from the chipset (however each of these use four of them to communicate with each other)."

    I thought they bumped the CPU <--> Chipset connection up to 8 lanes on this platform. Is that a typo or am I confused?
  • Slash3 - Friday, February 7, 2020 - link

    You are correct.
  • Valantar - Friday, February 7, 2020 - link

    Great review, love the broad perspective and testing across different OSes! An error though: "All the Threadripper 3000 family CPUs support a total of 64 PCIe 4.0 lanes from the CPU, and another 24 from the chipset (however each of these use four of them to communicate with each other" - this is wrong for TRX40; the CPU and chipset both have 8 PCIe lanes dedicated to communication that do not count in the total. Source: https://www.anandtech.com/show/15121/the-amd-trx40...
  • dwade123 - Friday, February 7, 2020 - link

    Terrible performance scaling from 32 cores to 64 cores. Even prosumers won't benefit much from that many cores. And the price tag... ouch. The 3000 series will easily be the worst-selling Threadripper.
  • RSAUser - Friday, February 7, 2020 - link

    Scaling looks pretty good: take the clock speed difference into account, plus a little extra for thread spawning and control, and it looks like a good 80%+ scaling for most multi-threaded tasks.
  • FunBunny2 - Friday, February 7, 2020 - link

    " games have not included software renderers for ~two decades."

    clearly, only those with embarrassingly parallel problems will benefit from these sorts of chips. and, by embarrassingly parallel one means intra-application, and not just lots of interWeb sessions.
  • FunBunny2 - Friday, February 7, 2020 - link

    oops. not the right quote: "The 3000 series will easily be the worst-selling Threadripper."
  • Kjella - Friday, February 7, 2020 - link

    Obviously this particular processor is a low-volume product, but they needed a workstation platform between AM4 and EPYC, and since it's a halo product of a server chip it probably didn't cost AMD much to add it to the lineup. The biggest clue is probably that there's no 3980X; they're not fleshing out the lineup, just making one extreme processor for bragging rights.

    But I wouldn't underestimate the number of people who can say "You're paying me >$100k/year to do this; if I'm 5% more efficient with a $4K processor, it's worth it". They exist, and even though they're obviously not a mass market, it's not just for show. That's on top of the PR value.
  • FunBunny2 - Friday, February 7, 2020 - link

    "You're paying me >$100k/year to do this,"

    at some point even the self-absorbed CEO class will realize that lots of those folks are engaged in non-producing overhead tasks. some things are just not worth the costs saved.
  • monkeydelmagico - Friday, February 7, 2020 - link

    I think it's really cool that Ian got to set the price on this chip. Kudos.
  • kramik1 - Friday, February 7, 2020 - link

    If I am not mistaken, all newer AMD CPUs support ECC; it just depends on whether the motherboard BIOS supports it and gets QA'd for it. Some users on Reddit were saying that even some B450 boards worked with ECC. I would be surprised if the board you were testing with didn't support it. It is not a feature that AMD charges for the way Intel does.
  • Ian Cutress - Friday, February 7, 2020 - link

    ECC might work, but it's not validated. There's a difference there.
  • Mikewind Dale - Saturday, February 8, 2020 - link

    I have a Gigabyte X470 Aorus Gaming 7 WiFi with a Ryzen 7 2700X and Kingston KSM26ED8/16ME (DDR4-2666 ECC) memory. The Gigabyte specifications page says it supports ECC. And indeed, when I run "cmd /k wmic memphysical get memoryerrorcorrection", the output indicates that ECC is working.

    So just check your motherboard's specs, and if it says it supports ECC, you should be good to go.
  • willis936 - Friday, February 7, 2020 - link

    I wonder if a Linux host with a 128-thread Windows client VM would have higher performance than running Windows on bare metal.
  • Ratman6161 - Friday, February 7, 2020 - link

    Hmmm, could be interesting to install VMware ESXi on it, then create a VM with all processors assigned to it?
  • Mikewind Dale - Friday, February 7, 2020 - link

    Can I suggest you make a test where you run two instances of a given application? In many of these tests, 64 cores barely outperform 32 cores. However, that could mean that one instance of a given application has trouble using more than 32 cores. It may still be that two simultaneous instances of the same application could together use 64 cores effectively.

    For me at least, this is a realistic use case. I run statistical regressions in Stata, and one script file often contains dozens of different regressions to run. Now, Stata has a multicore version, licensed per core, which parallelizes the underlying linear algebra. But Stata also allows free trivial parallelization, in which each regression is run as a single-thread process, simultaneously. Stata does this by opening additional instances of itself in the background. So the user opens one instance of Stata, and then Stata opens an independent instance of itself in the background. Each regression is run on a different thread, in a different instance of Stata, and all the results are pooled together later.

    My suspicion is that even when an application cannot effectively use 64 cores in a single instance, running two instances of the same application at once would be able to use 64 cores. I'd like to see a test of this.
  • Slash3 - Friday, February 7, 2020 - link

    Small note: on page one, in your Ryzen chart, you list the 3950X as having only 32MB of L3 cache. As a dual-chiplet CPU it has 4x16MB = 64MB of L3.
  • Slash3 - Sunday, February 16, 2020 - link

    ...still not fixed, guys.
  • Scipio Africanus - Friday, February 7, 2020 - link

    As others may have said, this is a halo product. If it makes money, great; otherwise break-even or even a small loss is fine. Audi doesn't need its R8 to be a cash cow, BMW doesn't need the i8 to make big bucks, and Acura doesn't need the NSX to rake in the dough; they have their core offerings for that. These products exist to give the consumer something to be wowed by for the brand.
  • iAPX - Friday, February 7, 2020 - link

    Just to be clear, the 3990X is the king but the 3970X is the best performance/price option?

    This is incredible; AMD took the crown and is now the clear leader in some markets.
  • GreenReaper - Saturday, February 8, 2020 - link

    I mean, if you're looking for pure price/performance, you probably want the 3960X (or, if you can stomach it, something much smaller like the Ryzen 2200G or Athlon 3000G).

    But yeah - for the 3990X, you're paying twice the price of the 3970X but never getting twice the performance, partly due to the power limit, but also due to scaling issues - some may be Windows-specific, but many are not. Heck, half the time it's no better at all - or worse.

    Personally, I look at the power rating (and also whether it can actually use all that power), although I guess it's possible to bin chips such that they are just not very efficient at a given speed. Cache can be very important as well - of course, that's part of the power rating. Usually you get a much better deal for not using the "full" CPU either, but one with defects - the tradeoff being limited capacity.
  • Korguz - Friday, February 7, 2020 - link

    Just checked 2 local stores; the price for these is between $5,250 and $5,400... wow
  • Makaveli - Friday, February 7, 2020 - link

    Yup Canada Computers has it for $5,249 CAD

    So $1,259 Retailer markup.
  • Makaveli - Friday, February 7, 2020 - link

    Actually my bad there is no edit button.....

    $3990 USD = $5306.84 CAD
  • MattZN - Friday, February 7, 2020 - link

    It's on Amazon now for basically $4000.

    -Matt
  • Korguz - Friday, February 7, 2020 - link

    $4K US. The prices I mentioned, as Makaveli noticed, were CAD :-)
  • Sahrin - Friday, February 7, 2020 - link

    If AMD can get a Zen 2 core to run at <3W@3.4GHz Intel is fucked.
  • Alistair - Friday, February 7, 2020 - link

    They already have. The new Ryzen 4800U.
  • Orkiton - Friday, February 7, 2020 - link

    So... there's nothing better for a runner-up (Intel) than a pushy competitor (AMD).
  • HStewart - Friday, February 7, 2020 - link

    This is an honest generic CPU question, not directly related to this CPU except that it has 64 cores.

    I understand that 4 or even 8 cores are helpful for client machines, but I wonder whether 32 or 64 cores is too many to provide any benefit, especially in a single application with a mostly visual user interface, which to my knowledge is not really multi-threaded because there is a single resource: the video screen.
  • HStewart - Friday, February 7, 2020 - link

    One note on render farms: in the past I created my own render farms, and it was better to use multiple machines than more cores, because the disk I/O load can be distributed. Yes, it is a more expensive option, but disk I/O is seriously more time-consuming than processor time.

    Note: a content creation workstation is a different case - and more cores would be nice.
  • MattZN - Friday, February 7, 2020 - link

    SSDs and NVMe drives have pretty much removed the write bottleneck for tasks such as rendering or video production, and memory has removed the read bottleneck. Very few of these sorts of workloads are dependent on I/O any more. Rendering, video conversion, bulk compiles... on modern systems there is very little I/O involved relative to the CPU load.

    Areas which are still sensitive to I/O bandwidth would include interactive video editing, media distribution farms, and very large databases. Almost nothing else.

    -Matt
  • HStewart - Saturday, February 8, 2020 - link

    I think we need to see a benchmark specifically on render farms: a single 64-core computer, versus two networked 32-core machines, versus a network of quad-core machines. All machines should have the same CPU design, the same storage, and possibly the same memory - memory is the questionable part because of the per-core load.

    I have a feeling that with a correctly designed render farm the single 64-core machine will likely lose the battle, but of course the render job must be a large one to merit this test.

    For video editing and workstation-style design work, a single CPU should be fine.
  • HStewart - Saturday, February 8, 2020 - link

    One more thing: these render tests need to use real render software - not POV-Ray, Corona, and Blender.

    I personally use LightWave 3D from NewTek, but 3ds Max, Maya, and Cinema 4D are good choices - also custom RenderMan software.
  • Reflex - Saturday, February 8, 2020 - link

    It wouldn't change the results.
  • HStewart - Sunday, February 9, 2020 - link

    Yes it would - these are real 3D render projects. For example, one of the reasons I got into LightWave is the Star Trek movies; it was also used in older series like Babylon 5 and seaQuest DSV. Think Pixar movies instead of scenes in games and such.
  • Reflex - Sunday, February 9, 2020 - link

    It would not change the relative rankings of the CPUs against each other by any appreciable amount, which is what people read a comparative review for.
  • Reflex - Saturday, February 8, 2020 - link

    Network latency and transfer rates are significantly worse than PCIe. Below you challenged my points by discussing I/O and storage, but here you go the other direction, suggesting a networked cluster could somehow be faster. That is not only unlikely, it would be ahistorical, as clusters built for performance have always been a workaround for limited local resources.

    I used to mess around with Beowulf clusters back in the day, it was never, ever faster than simply having a better local node.
  • Reflex - Friday, February 7, 2020 - link

    You may wish to read the article, which answers your 'honest generic CPU question' nicely. Short version is: It depends on your workload. If you just browse the web with a dozen tabs and play games, no this isn't worth the money. If you do large amounts of video processing and get paid for it, this is probably worth every penny. Basically your mileage may vary.
  • HStewart - Saturday, February 8, 2020 - link

    Video processing and rendering likely depend on disk I/O - also, video output as far as I know is single-threaded, unless the video card allows multiple connections at the same time.

    I just think adding more cores is trying to get away from actually tackling the problem. The design of the computer needs to change.
  • Reflex - Saturday, February 8, 2020 - link

    That's not really the main bottleneck these days; I can fully saturate all 8 cores/16 threads on my existing Ryzen with jobs in Handbrake. That means the CPU is still the main bottleneck. That still seems to be the case even at 64/128, so again it's not I/O or disk, although those do need improvement for other tasks.

    PCIe 5 is on the way and will help. Faster forms of storage memory are coming - Optane and so on - even if it's happening in fits and starts. Intel and AMD don't own that, and can only take responsibility for the parts they do own, primarily the CPU and chipset, and both are doing well there (especially AMD lately).
  • HStewart - Sunday, February 9, 2020 - link

    Still, even with PCIe 5, I believe the application has single-threaded access to the storage device.

    Handbrake is a pure example of this - you are missing the I/O of reading the optical drive in that case.

    A simple search on the Internet, including the Handbrake forums, shows that Handbrake does not handle more than 6 cores correctly.
  • Reflex - Sunday, February 9, 2020 - link

    Uh, again, I'm saturating all 8 cores and 16 threads. I have friends using Threadrippers to saturate far more cores than that. Handbrake goes beyond six cores easily.

    And who gives a damn about the I/O of an optical drive? Who is using optical drives for this type of work? Do you even know how this software works and what it's for? I'm working on large encoded files sitting on SSDs and encoding them into a target format. There is literally no I/O bottleneck there; most of the work is on the CPU, not the drive - I simply need to read the start state and write the results as fast as the CPU can do the encoding.

    Seriously, there are a ton of workloads that aren't I/O limited, in fact most are not.
  • Korguz - Monday, February 10, 2020 - link

    reflex.. just give up.. hstewart will keep trying to argue his point as being right, while not having any tangible proof other than his own words.
  • HStewart - Saturday, February 8, 2020 - link

    This question is not in specific response to this cpu - but in general when more cores are added to system.
  • Korguz - Saturday, February 8, 2020 - link

    so that would also include intel systems, right ? hstewart ?
  • Xyler94 - Monday, February 10, 2020 - link

    No, Intel has the ultra super special *Insert random CPU extension* that mitigates any and all bad from adding more cores. Gosh, can't you tell?
  • DannyH246 - Friday, February 7, 2020 - link

    So a 4k chip absolutely obliterates 20k's worth of chips from Intel, yet apparently it's overpriced. LOLOLOL Intel and their fanboys are funny.
  • dwade123 - Friday, February 7, 2020 - link

    This tard is a good example of what a blind fanboy looks like. Only hyping and not buying, whereas actual potential TR buyers got priced out of the game by AMD. X399 owners are forced to migrate to peasant AM4 due to high prices lol
  • Irata - Friday, February 7, 2020 - link

    It appears you made a typo when you tried to post on wccftech.
  • dwade123 - Friday, February 7, 2020 - link

    Fact is most x399 owners can't afford the new TR. Only ones defending these shady prices are lowly AM4 users who love the brand to death. Therefore, overpriced.
  • Makaveli - Friday, February 7, 2020 - link

    lol the same X399 owners that were buying $500 motherboards?

    Can't afford this?

    Do you even know what you are talking about?
  • dwade123 - Friday, February 7, 2020 - link

    Another AM4 pleb trying to present X399 users. Most sold TR models were under $1k. TR 3000 series starts at $1500. Try again at shilling.
  • Korguz - Friday, February 7, 2020 - link

    dwade123, sounds like you were happy paying Intel for its CPUs before Zen was released :-) the tables have turned and AMD has the better CPUs, and all of a sudden it's wrong to charge prices like this??
  • deksman2 - Friday, February 7, 2020 - link

    This Threadripper isn't necessarily targeted at mainstream consumers, but rather at small businesses and studios doing content creation (such as VFX companies) - just like now, where VFX companies went for the 32-core TR (most 'regular consumers' went for maybe 16 cores/32 threads if they could afford it).

    And on that note... just who from the regular consumer market is able to afford a $20,000 Xeon to begin with?
    The Xeons are 5x more expensive in comparison.
    So, on a cost scale alone, which CPU do you think would be more accessible?
    The Xeons or the 3990X?

    Also, as the article points out, the software is having problems with scaling beyond 32 cores properly to begin with.
    So, most 'regular consumers' who can afford TR will likely go for the 32 core/64 thread version from the Zen 2 family (whereas VFX companies and small businesses would transition to 3990x once the software catches up).
  • MattZN - Friday, February 7, 2020 - link

    Shady prices? Buying a system with similar capabilities just 2 years ago would have cost me $40,000.

    -Matt
  • Spunjji - Monday, February 10, 2020 - link

    dwade123 is a Trump supporter; logic has no place in its world view.
  • Korguz - Saturday, February 8, 2020 - link

    ahh so how about those that were defending Intel's prices before Zen came out?? going by what you said, those CPUs were also overpriced??? fact is, most couldn't afford the prices Intel was charging for its higher-end chips... it's funny how when Intel does something it's ok.. but when AMD does the same, it's wrong....
  • MattZN - Friday, February 7, 2020 - link

    I think you are taking a rather expansive license with your use of "most". A better way to think about it is... who actually *needs* a system this big? I would guess that most of the people you are thinking about don't actually need a 3990X system for what they do. Not really.

    Not to mention that actually utilizing a 64-core/128-thread CPU fully would also require commensurate amounts of ram. For our needs, which are mostly bulk-compiles, 2GB/thread is required which is $1400 worth of memory just by itself (for 256GB worth of EUDIMMs).

    $4000 + $1400 + storage... yah, it adds up. At that point nobody is going to be crying over a $500 motherboard.

    -Matt
  • Spunjji - Monday, February 10, 2020 - link

    Ableist slurs scattered amidst a nonsensical rant?
    Check.

    Strawmanning anyone who disagrees?
    Check.

    Referring to anyone not buying top-of-the-range gear as a "peasant"?
    Check.

    Oh boy, it's a troll! Hooray! :|
  • jmunjr - Friday, February 7, 2020 - link

    Wow I saw benchmarks of the 3990X vs 2 x Xeon Platinum 8280 on Linux and what a beat down. The 3990X at worst matched the 2x8280 on a few tests and soundly beat on many others. Impressive!
  • Batty - Friday, February 7, 2020 - link

    Several people suggested compiling source code would be a good test of this chip's performance. I agree, and I would recommend Unreal Engine 4.24.1 built clean from source; it is a vast codebase and scales very well with more cores. My Intel 6-core/12-thread machine takes 73 minutes, for instance.
  • MattZN - Friday, February 7, 2020 - link

    My personal favorite is chromium (i.e. the source base for the chrome web browser). 30,000+ C++ files is a great test.

    -Matt
  • Rudde - Monday, February 10, 2020 - link

    AnandTech used to benchmark a Chromium compile. I can't recall why they stopped.
  • Ian Cutress - Friday, February 14, 2020 - link

    Something in Windows 1903/1909 broke our script. Due to events and travel I haven't had a week of downtime to sit down and fix it.
  • wizyy - Friday, February 7, 2020 - link

    "In The Midst Of Chaos, AMD Seeks Opportunity"

    Sounds like the title of one of the chapters of Romance of the Three Kingdoms (a novel by Luo Guanzhong, 14th century).
    Even if it isn't meant that way, it's appreciated :)
  • Ian Cutress - Friday, February 14, 2020 - link

    It's an edited Sun Tzu quote :)
  • 007ELmO - Friday, February 7, 2020 - link

    When you use the word "amortize" with anything but a mortgage.
  • deksman2 - Friday, February 7, 2020 - link

    You know, I actually find this quite an intuitive review of the CPU - it illustrates how badly Windows software is lagging behind the hardware.

    And while Linux may not be the go-to choice for enterprise users (although stability-wise I do think Linux is pretty solid), I still think including it in this review would have been a good idea (historically, it DOES have FAR superior support for multi-core CPUs above 32 cores than Windows).

    On that note, perhaps enterprise users should consider going Linux and looking for open-source software to replace their existing tools.
    Open source can generally get better upgrades/support for the simple reason that it's open.

    I've seen businesses using Linux (where before they commonly had Windows) running exactly the same software, for example.
    So, I don't think transitioning to Linux would be a problem. In the short term, with adjustment and all, yes perhaps, but in the long run they will probably save money by using free/open-source software. Even in the short term, companies could run both Windows and Linux together to help with adjustment and training until they are ready to completely move to Linux.
  • eek2121 - Friday, February 7, 2020 - link

    Meanwhile, Linux unlocks the true potential of this beast: https://www.phoronix.com/scan.php?page=article&...
  • 111alan - Friday, February 7, 2020 - link

    Just saying: this thing basically beats their own dual EPYC 7702 config (Cinebench R20: 28974 for 2S, 18795 for 1S).
  • james4591 - Friday, February 7, 2020 - link

    The 3990X is not aimed at home or enthusiast users. It's aimed primarily at production studios and high-end workstations for rendering, data processing, and encoding/decoding multimedia.

    With this, you don't even need gimmicks like QuickSync or NVENC to encode HD video. You can do all the encoding in software codecs like H.264 and Xvid and use SMT to process the video faster.

    Basically, you could encode a 4K video in H.264, downscale it to 1080p@60Hz, and have it done before you finish eating a sandwich - roughly about 8 minutes, give or take a few.

    Plus, with that many cores and CPU grouping, you could assign different processes to different processor groups, which would cut scheduling contention and let more tasks run concurrently without a performance penalty.
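    To sketch what that looks like in practice, here's a minimal Win32 example - a rough sketch, not production code - that starts one worker per processor group and lets it float across every logical processor in its group. Assumptions: a 64-bit build (where KAFFINITY is 64 bits wide) and a placeholder Worker function standing in for real work.

        #include <windows.h>
        #include <stdio.h>

        /* Placeholder for a compute-heavy task. */
        DWORD WINAPI Worker(LPVOID arg) {
            (void)arg;
            return 0;
        }

        int main(void) {
            WORD groups = GetActiveProcessorGroupCount();
            HANDLE threads[4];      /* 128 logical processors means only 2 groups */
            DWORD n = 0;
            printf("Processor groups: %u\n", groups);

            for (WORD g = 0; g < groups && n < 4; g++) {
                /* Start suspended so group affinity is set before it runs. */
                HANDLE t = CreateThread(NULL, 0, Worker, NULL, CREATE_SUSPENDED, NULL);
                if (!t) continue;

                GROUP_AFFINITY aff = {0};
                aff.Group = g;
                /* Mask covering every logical processor in group g. */
                DWORD count = GetActiveProcessorCount(g);
                aff.Mask = (count >= 64) ? ~(KAFFINITY)0
                                         : (((KAFFINITY)1 << count) - 1);
                SetThreadGroupAffinity(t, &aff, NULL);
                ResumeThread(t);
                threads[n++] = t;
            }
            if (n) WaitForMultipleObjects(n, threads, TRUE, INFINITE);
            for (DWORD i = 0; i < n; i++) CloseHandle(threads[i]);
            return 0;
        }

    A process starts out in a single group, so without something like this (or a scheduler that spreads threads for you), a group-unaware application can end up on only half the logical processors of a 128-thread chip.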
  • IanToo - Friday, February 7, 2020 - link

    >I’m proud to say that this price was my idea – AMD originally had it for something different

    What was the price, and when did you pitch the idea? None of your 3990x articles or tweets have this.
  • Ian Cutress - Saturday, February 8, 2020 - link

    I spoke about it on my twitter and on my CES 2hr livestream with Wendell. Ryan talked about it on twitter at the time of the announcement too.
  • msroadkill612 - Monday, February 10, 2020 - link

    It is just a pity you didn't make the variable "X" a cheaper currency than USD :)
  • 111alan - Friday, February 7, 2020 - link

    (and meanwhile 1x8280 also beats 2x8280 in several tests)
  • Ian Cutress - Saturday, February 8, 2020 - link

    The downsides of a NUMA environment with crosstalk.
  • FakThisShttyGame - Friday, February 7, 2020 - link

    Intel needs to get their shit together and be competitive again or else AMD will do the same Skylake 14nm+++++ BS to us in the future. We need competition
  • Makaveli - Friday, February 7, 2020 - link

    AMD doesn't yet have a big enough market share - or the money - to do what Intel has done the last few years and slow down.
  • HellHammerThrash - Friday, February 7, 2020 - link

    I just hope this cpu will run both Zork and Pong well. Idk, maybe throw in a pair of Titan RTX's and enough RAM too....
  • 7beauties - Friday, February 7, 2020 - link

    Ian, why did you close with such a final thought? Your admitting to having made up the $3990 price tag as a joke makes me mistrust your reviews and thoughts.
  • Ian Cutress - Saturday, February 8, 2020 - link

    What? AMD briefed us before the CES keynote about the performance and their intended price. I said they should make it $3990. About 4am the next morning before the keynote, I got an email saying that they'd changed the SEP to $3990.
  • biodoc - Saturday, February 8, 2020 - link

    Linux unleashes the full power of this chip. Read the phoronix review.
    https://www.phoronix.com/scan.php?page=article&...
  • dickeywang - Saturday, February 8, 2020 - link

    It would've been nice if we could see some benchmarks on a Linux box.
  • Ric1194 - Saturday, February 8, 2020 - link

    I think the results are a bit misleading. Power users are more about multitasking, where processor grouping is not that important. People who buy a Threadripper 3990X will do a lot of things simultaneously, like playing a game while downloading something and waiting for other tasks to finish. To represent a more realistic scenario, it would be better to run two or more programs at once - like Photoshop and gaming while downloading something.
  • GreenReaper - Saturday, February 8, 2020 - link

    I think if you're switching between a variety of tasks, and playing games, you might prefer the higher-frequency, lower-core options - and perhaps some judicious prioritization.

    You have limited power budget. Unless you *really* know you need that many cores, and ideally have seen someone do a benchmark of it beforehand, you probably don't.
  • Zingam - Sunday, February 9, 2020 - link

    I would set up several machines if I needed to do different things at the same time. Buying a single Threadripper to multitask is more than just stupid - it is also expensive.
  • ballsystemlord - Saturday, February 8, 2020 - link

    Thanks for the article Ian and Gavin!
    I found no spelling or grammar errors!
  • Railgun - Saturday, February 8, 2020 - link

    So when will the growing backlog of benchmarks be posted into bench?
  • Ian Cutress - Friday, February 14, 2020 - link

    They should be in Bench. If not, drop me an email.
  • darealist - Saturday, February 8, 2020 - link

    $4000 to ripoff their loyal fanbase. All the shillz be liek "it's a steal!" while typing on their 1600x build ROFL.
  • levizx - Saturday, February 8, 2020 - link

    what a stupid dipshit
  • Spunjji - Monday, February 10, 2020 - link

    *facepalm*
    It *is* a steal - for a 64 core CPU.

    I can't afford one, I'd never have any use for one, and I don't think anyone who qualifies as AMD's "loyal fanbase" would either. It's basically an industrial tool - the people who need this will buy it based on that need.
  • StuntFriar - Saturday, February 8, 2020 - link

    While it's a little specific, it would be cool to benchmark some Unreal Engine 4 game developer workflows, such as doing a full rebuild/repackage of a game (for Windows, Android, iOS and consoles), rebaking the lighting for a level, importing assets, etc...

    I'm suggesting UE4 because Epic already has a bunch of freely available demo projects (some are graphical showcases, others are actual playable games that will pass certification on some consoles with a little work) so it's easy to set up a test that other folks can try at work for themselves - which would make it far easier to decide if a CPU upgrade would be worth it.

    For fun, you could even do the same tests on Windows, MacOS and Linux to see if there's a tangible difference between operating systems (though the vast majority of developers would be using Windows regardless).

    The UE4 Editor seems to be highly parallel in most of its building/compiling tasks, and I do wonder whether they scale up proportionately past 16 cores.

    Probably worth doing some Unity Engine benchmarks too since that's the most popular engine on the planet. Haven't used it in over a year, but it seemed to favour higher single-threaded performance for a lot of the building and asset import tasks. But again, it's fairly easy to set up benchmarks that users can replicate at work.

    Cheers.
  • Betonmischer - Saturday, February 8, 2020 - link

    Hi Ian! I'd like to chime in on the difference between the Pro and Enterprise versions of Windows 10 with regard to 128-thread management. Are you absolutely sure that your Pro test system is up to date? I see 2 sockets in the screenshot, which shouldn't happen on either version. Here's a picture of what it looks like on my colleague's test bench: it's Windows 10 Pro, and it's detecting a 128-thread CPU as a single socket. We found no impact on performance either, including in the benchmarks that you specifically listed on page 3.

    https://imgur.com/G2VqgoU
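    For anyone who wants to double-check what their own install reports, a small sketch along these lines (error handling mostly omitted) prints the processor packages and groups Windows exposes:

        #include <windows.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            DWORD len = 0;
            /* First call only reports the required buffer size. */
            GetLogicalProcessorInformationEx(RelationProcessorPackage, NULL, &len);
            SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX *buf = malloc(len);
            if (!buf || !GetLogicalProcessorInformationEx(RelationProcessorPackage, buf, &len))
                return 1;

            int packages = 0;
            /* Records are variable-length; advance by each record's Size. */
            for (char *p = (char *)buf; p < (char *)buf + len; ) {
                SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX *info = (void *)p;
                if (info->Relationship == RelationProcessorPackage) packages++;
                p += info->Size;
            }
            printf("Packages (sockets): %d, processor groups: %u\n",
                   packages, GetActiveProcessorGroupCount());
            free(buf);
            return 0;
        }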
  • realbabilu - Saturday, February 8, 2020 - link

    Since this is targeted at a very segmented market like render farms, a benchmark of a single TR4 3990X versus a cluster of several cheaper Ryzen 3950X machines would be fascinating.
  • msroadkill612 - Monday, February 10, 2020 - link

    I sometimes fantasise about clusters/arrays of Renoir APUs, each with a 2TB NVMe drive of "edge" data, for rendering and AI.

    What do folks think?
  • hammer256 - Sunday, February 9, 2020 - link

    Hm, I wonder if AMD would release a higher clocked EPYC 7702p variant for workstation use, raise the TDP to say 320W, have threadripper clocks, and sell it for $5-6K. For the 64 core use cases I can't imagine an extra $1-2K would matter for the target audience. For those people I imagine 8 channels of registered memory would matter a lot more for the bandwidth, ECC, and capacity, but still want the high clocks.
  • B3an - Sunday, February 9, 2020 - link

    In your Handbrake test, could the 3960X/3970X both be scoring lower than the 3950X because you're also using Windows 10 Pro? Why else would they score lower, considering that the 3990X scores significantly higher than all of those CPUs when using Windows 10 Enterprise?
  • Betonmischer - Monday, February 10, 2020 - link

    The Windows 10 Pro install in this review's case is highly likely out of date. Otherwise it would present the 3990X as single-socket, like Windows 10 Enterprise did.
  • msroadkill612 - Sunday, February 9, 2020 - link

    It has long puzzled me that this debate seems premised on unchangeable software, when of course few things are more readily changed.

    That little software uses 128 threads is hardly surprising, when it is such a large increase to an unprecedented level.

    Even if the extra threads have limited current utility, surely they are a nice reserve resource to have as a likely upgrade path.
  • nt300 - Sunday, February 9, 2020 - link

    You need to bring the cores to market to push software developers into utilizing such horsepower. Intel won't do that, because they would rather overcharge for very little, whereas AMD has the upper hand and can add as many cores as possible. More cores is an area AMD can compete in, on top of providing a much better microarchitecture.
  • msroadkill612 - Monday, February 10, 2020 - link

    my fault for not making my point better - this is but one example of the mindset I refer to: "current benchmarks show x better than y, so buy x", even when y has far better fundamentals and it's a very dynamic ecosystem.

    PCIe 4 GPU support is a current maddening example. It is barely mentioned when comparing a multi-year-lifespan product (Nvidia vs Navi on X570) in an ecosystem clearly headed to exceed current GPU cache levels.
  • dwade123 - Monday, February 10, 2020 - link

    64 cores of uselessness is only good for servers. There ain't gonna be time for future software to make use of 128 threads either because tomorrow's software will shift to the GPU for AI and superior performance.
  • msroadkill612 - Tuesday, February 11, 2020 - link

    You mean like "nobody needs more than 4 cores"?

    Never say never.
  • nt300 - Sunday, February 9, 2020 - link

    Once again AMD demonstrates aggressive innovation & technological advancement. Now that a set of Zen engineers has moved over to the RTG, I can't wait to see how well they fare with the RDNA2 enhancements.
  • Redstorm - Monday, February 10, 2020 - link

    It blows me away as a technologist that you tested this on Windoz. Unlock the potential with a true performance OS like Linux.
  • 29a - Friday, February 14, 2020 - link

    It blows me away that you consider yourself an expert but use the term Windoze. I bet you use M$ too.
  • msroadkill612 - Monday, February 10, 2020 - link

    Intuitively, turning off SMT seems an attractive option for many - for now anyway. For an extra $2k, you turn 64 threads into 64 cores, avoid some software issues & presumably get better utility from expensive memory.
  • Silma - Monday, February 10, 2020 - link

    Can we still categorize a processor purchase as "Enthusiast" when it costs $3,990 ?
    Especially when the only reason to purchase it is 3D rendering?

    I don't think so. We need a new category and it's probably "Pro 3D renderer".
  • Pessimism - Monday, February 10, 2020 - link

    Can it run Crysis?
  • Mugur - Tuesday, February 11, 2020 - link

    Yes, see LTT video. In software.
  • XiZeL - Tuesday, February 11, 2020 - link

    Doesn't the Ryzen 9 3950X have 64MB of L3 cache? The table states 32MB.
  • chrkv - Tuesday, February 11, 2020 - link

    What Windows version were you using? I see claims that since version 18362.535, Windows 10 shows 1 socket for the 3990X - look for "18362.535" here: https://translate.google.com/translate?hl=&sl=...
  • Betonmischer - Tuesday, February 11, 2020 - link

    That's right. Here's proof that it does:

    https://imgur.com/G2VqgoU
  • 29a - Friday, February 14, 2020 - link

    AT can't be bothered by the little stuff like OS patches when they're doing an AMD review. Haven't you seen any of their AMD launch reviews, they screw every one of those up.
  • TokyoQuaSar - Wednesday, February 12, 2020 - link

    Very interesting article! I hope you can update it with data from an Epyc 77xx (7702 or 7742). It would be nice to have a head-to-head comparison, if possible a test at equal frequencies, plus some tests on software that is very dependent on memory bandwidth, to see the influence of the 8 channels aside from the amount of memory.
  • vivs26 - Wednesday, February 12, 2020 - link

    Are there any Linux distros for desktop that support more than 64 cores?
  • TokyoQuaSar - Thursday, February 13, 2020 - link

    Not sure exactly, but this test was done on Ubuntu, and they don't mention any problems coming from the OS - only from the tested software:
    https://techgage.com/article/amd-ryzen-threadrippe...
    They do say the core count scales better on Linux.
  • HikariWS - Thursday, February 13, 2020 - link

    Very nice article! I've finally seen use cases where a high core count counts!

    Indeed, you should start adding some Linux benchmarks - I wonder how the kernel itself would handle that many cores. And of course M$ has to fix at least Pro for Workstations.

    I'd really like to see a review comparing HT enabled and disabled at around 8C. Is it worth disabling or enabling HT on my 9900KS? Under full load, is there a difference in performance and power consumption?

    How much performance do the virtual cores add over the physical ones? Does a workload on one type affect the other? If we force affinity onto one core, leave its SMT pair idle, and then put a full workload on it, how does the tested core perform?
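    One rough way to poke at that yourself - a sketch assuming the common enumeration where logical processors 0 and 1 are SMT siblings of physical core 0 (worth verifying first via GetLogicalProcessorInformationEx with RelationProcessorCore):

        #include <windows.h>
        #include <stdio.h>

        int main(void) {
            /* Pin this thread to logical CPU 0 only; run the timed workload,
               then repeat with mask 0x3 (both siblings) and compare. */
            if (!SetThreadAffinityMask(GetCurrentThread(), 0x1)) {
                fprintf(stderr, "SetThreadAffinityMask failed: %lu\n", GetLastError());
                return 1;
            }
            /* ... timed workload goes here ... */
            return 0;
        }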
  • HikariWS - Thursday, February 13, 2020 - link

    Still, I'm worried about AMD.

    Increasing clocks has been much harder than increasing core count. AMD is very aggressive on core count, yes, but has been struggling on clocks.

    The 9900KS is Intel's top product in this regard. I can assure you from personal tests how awesome it is. It idles at 45°C under a Noctua D15S. With Prime95, it goes to 80°C and holds 5GHz all-core for a few minutes before dropping to 4GHz, which it holds indefinitely.

    In real-world use, especially gaming and 4K playback, it's able to hold 5GHz indefinitely - I haven't seen its Turbo budget deplete even once! For anybody who doesn't need more than 8C/16T and benefits more from serial processing, it's the best of the best, and I doubt Comet Lake will bring a competitor to it.

    Intel has been increasing cores in response to AMD, and with exceptions they have been winning in overall performance against AMD CPUs with higher core counts.

    In the coming years we'll see algorithms struggle to scale in parallelism. Most software doesn't benefit from more than 4 or 8 threads, and being allocated to a virtual HT core just reduces the opportunity to perform better. When we reach software optimization limits, increasing core count won't benefit users any more, and we'll face increased demand for serial power.

    Then we get to microarchitecture. AMD is on a brand new one, while lithography issues are holding Intel back from widely distributing Sunny Cove, and they are close to finishing Willow Cove. When Intel finishes its 7nm, it will have 2 more powerful microarchitectures to bring to the desktop and server markets, while AMD is still working on its future one.

    Summing that up, I believe in a few years Intel will have consistent performance growth over its generations, while AMD will start struggling.
  • kuraegomon - Tuesday, February 18, 2020 - link

    Oh dear. Intel shill confirmed. What makes me so confident? "Most softwares don't benefit from more than 4 or 8 threads" - anyone who makes that statement in 2020 with the implication that it's a forward-looking statement is clearly being disingenuous.
  • Logic28 - Monday, May 11, 2020 - link

    This statement...

    "Increasing clocks has been much harder than increasing core count. AMD is very aggressive on core count, yes, but has been struggling on clocks."

    ...is frankly flat-out wrong. A year and a half ago you would have been fine saying this. But across the entire consumer and prosumer lineup, AMD destroys Intel, and the Ryzen 3950X has destroyed single-thread speeds across the entire internet - except, I guess, in some fanboy universe where they still want to bow down and befriend Goliath even when he is clearly getting beaten badly by David.

    Look at the actual stats: at each price point AMD CPUs are beating Intel's at single core, multi-core, and benchmarks in games, video editing, rendering, bloody compiling - they just are.

    So your statement is flat out a fabrication...
  • clsmithj - Thursday, February 13, 2020 - link

    Should have added Linux to the benchmark graph comparison.
  • alysdexia - Monday, May 4, 2020 - link

    Stop sayan performance when you mean speed.
    won't -> shan't
  • alysdexia - Monday, May 4, 2020 - link

    128 cores -> 128 threads
  • alysdexia - Monday, May 4, 2020 - link

    data has -> datum has
  • alysdexia - Monday, May 4, 2020 - link

    balance -> proportion; fast:free -> swift:slow; will -> shall; issues -> problems; shouldn't -> ouhtn't; more cores -> feler cores
  • AMDsucksFor3Drendering - Thursday, December 31, 2020 - link

    OMG, AMD and Microsoft are hurting 3D users who bought this useless processor. I have two 3990X processors trying to work with 3ds Max and V-Ray, and I cannot use the whole processor. Where is the solution to this problem?
