171 Comments

  • ibnmadhi - Monday, August 13, 2018 - link

    It's over, Intel is finished.
  • milkod2001 - Monday, August 13, 2018 - link

    Unfortunately, not even close. Intel has been dominating for the last decade or so. Now that AMD is back in the game, many will consider AMD, but most will still get Intel instead. The damage was done. It took AMD forever to recover from being useless, and it will take at least 5 years to win back serious market share. Better late than never, though...
  • tipoo - Monday, August 13, 2018 - link

    It's not imminent, but Intel sure seems set for a gradual decline. It's hard to eke out IPC wins these days, so it'll be hard to shake off AMD per-core; they no longer have a massive process lead that would let them lead on core count while keeping their margins, and ARM is also chipping away at the bottom.

    Intel will probably be a vampire that lives another hundred years, but it'll go from the 900lb gorilla to one on a decent diet.
  • ACE76 - Monday, August 13, 2018 - link

    AMD retail sales are equal to Intel now...and they are starting to make a noticeable dent in the server market as well...it won't take 5 years for them to be on top...if Ryzen 2 delivers a 25% increase in performance, they will topple Intel in 2019/2020
  • HStewart - Monday, August 13, 2018 - link

    "AMD retail sales are equal to Intel now"

    Desktop maybe - but that is minimal market.
  • monglerbongler - Monday, August 13, 2018 - link

    Pretty much this.

    No one really cares about the workstation/prosumer/gaming PC market. It's almost certainly the smallest measurable segment of the industry.

    As far as these companies' business models are concerned:

    Data center/server/cluster > OEM consumer (dell, hp, microsoft, apple, asus, toshiba, etc.) > random categories like industrial or compact PCs used in hospitals and places like that > Workstation/prosumer/gaming

    AMD's entire strategy is to desperately push as hard as they can into the bulwark of Intel's cloud/server/data center dominance.

    Though, to be completely honest, for that segment they really only offer pure core count and PCIe as benefits. Sure they have lots of memory channels, but server/data center and cluster are already moving toward the future of storage/memory fusion (eg Optane), so that entire traditional design may start to change radically soon.

    All-important: performance per unit of area inside a box, and performance per watt? Not the greatest.

    That is exceptionally important for small companies that buy cooling from the power grid (air conditioning). If you are a big company in Washington and buy your cooling via river water, you might have to invest in upgrades to your cooling system.

    Beyond all that, the Epyc chips are so freaking massive that they can literally restrict the ability to design 2-socket server motherboards that also have to house additional compute hardware (eg. GPGPU or FPGA boards). I laugh at the prospect of a 4-socket Epyc motherboard. The thing would be the size of a goddamn desk. Literally a "desktop"-sized motherboard.

    If you can't figure it out, it's obvious:

    Everything except for the last category involves massive years-spanning contracts for massive orders of hundreds of thousands or millions of individual components.

    You can't bet hundreds of millions or billions in R&D, plus the years-spanning billion-dollar contracts with GlobalFoundries (AMD) or the tooling required to upgrade and maintain equipment (Intel), on the vagaries of consumers, small businesses that make workstations to order, that small fraction of people who buy workstations from OEMs, etc.

    Even if you go to a place like Pixar or a game developer, most of the actual physical computers inside are regular, bog-standard, consumer-level PCs, not workstation-level equipment. There certainly ARE workstations, but they are a minority of the capital equipment inside such places.

    Ultimately that is why, despite all the press, despite sending expensive test samples to AnandTech, despite flashy PowerPoint presentations given by arbitrary VPs of engineering or CEOs, all of the workstation/prosumer/gaming stuff is just low-binned server equipment.

    Because those are really the only 2 categories of products they make: pure consumer and pure server. Everything else is just a partially enabled/disabled variation on those 2 flavors.
  • Icehawk - Monday, August 13, 2018 - link

    I was looking at some new boxes for work, and our main vendors offer little if anything AMD, either for server roles or desktop. Even if they did, it's an uphill battle to push a "2nd-tier" vendor (AMD is not, but is perceived that way by some) past management.
  • PixyMisa - Tuesday, August 14, 2018 - link

    There aren't any 4-socket EPYC servers because the interconnect only allows for two sockets. The fact that it might be difficult to build such servers is irrelevant because it's impossible.
  • leexgx - Thursday, August 16, 2018 - link

    Is more than 2 sockets even needed when you have so many cores to play with?
  • Relic74 - Wednesday, August 29, 2018 - link

    Actually, there kind of are: Supermicro, for example, has created a 4-node server for Epyc. Basically it's 4 computers in one server case, but the performance is equal to, if not better than, that of a true 4-socket server. Cool stuff, you should check it out. In fact, I think this is the way of the future and multi-socket systems are on their way out, as this solution provides more control over what each CPU and its individual cores are doing, and better power management, since you can shut down individual nodes or put them in standby, whereas a server with 4 sockets/CPUs is basically always on.

    There is a really great white paper on the subject that came out of AMD, where they stated that they looked into creating a 4-socket CPU and a motherboard capable of handling all of the PCIe lanes needed; however, it didn't make sense for them to do so, as there weren't any performance gains over the node solution.

    In fact, I believe we will see a resurrection of blade systems using AMD CPUs, especially now with all of the improvements that have been made in multi-node cluster computing over the last few years.
  • Eastman - Tuesday, August 14, 2018 - link

    Just a comment regarding studios and game developers. I work in the industry, and 90% of these facilities do run Xeon workstations with ECC memory, either custom-built or purchased from the likes of Dell or HP. So yes, there is a marketplace for workstations. No serious pro would do work on a mobile tablet or phone, where the huge market growth is. There is definitely a place for a single 32-core CPU. But among, say, 100 workstations there might be a place for only 4-5 of the 2990WX; those would serve particle/fluid dynamics simulation. Most of the workload would be sent to render farms, sometimes offsite, and those render farms could use Epyc/Xeon chips. If I were a head of technology, I would seriously consider these CPUs for my artists' workflows.
  • ATC9001 - Wednesday, August 15, 2018 - link

    Another big thing which people don't consider is that the true "price" of these systems is nearly neck and neck. Sure, you can save a couple hundred with an AMD CPU, but by the time you add in RAM, mobo, PSU, storage, etc., you're talking $5k+...

    Intel doesn't want AMD to go away (think anti-trust) but they are definitely stepping up efforts which is great for consumers!
  • LsRamAir - Thursday, August 16, 2018 - link

    We've been patient! Looked at all the ads multiple times for support, too. Please drop the rest of the knowledge, Sir! "Still writing" on the overclocking page is nibblin' at my patience and intrigue hemisphere.
  • Relic74 - Wednesday, August 29, 2018 - link

    Yes, of course there is. I have one of the new 32-core systems and I use it with SmartOS, a VM management OS that could allow up to 8 game developers to use a single 32-core workstation without a single bit of performance lost - as long as each VM has control over its own GPU. 4 cores (most games don't need more than that; in fact, no game needs more than that), 32GB to 64GB of RAM (depending on server config) and an Nvidia 1080 Ti or higher, per VM. That is more than enough and would save the company thousands. In fact, that is exactly what most game developers use: servers with 8 to 12 GPUs, dual CPUs, 32 to 64 cores, 512GB of RAM - standard config.

    You should watch Linus Tech Tips' video on running 12 gaming nodes off a single computer; it's the future and it's amazing.
  • eek2121 - Saturday, August 18, 2018 - link

    You are downplaying the gaming market. It's a multi-billion dollar industry. Nothing niche about it.
  • HStewart - Monday, August 13, 2018 - link

    I agree with you - so this is mainly concerning "It's over, Intel is finished".

    Normally I don't care to discuss AMD-related threads - but when people badmouth Intel, it's all fair game in my opinion.

    But what is important, and why I agree, is that it's not even close. Like it or not, the PC gaming industry - the primary reason for the desktop now - is a minimal part of the industry; computers are mostly going mobile. Just go into your local Best Buy and you'll see why it's not even close.

    Plus, as in a famous WWII saying, "The sleeper has awakened." One has got to be blind to think "Intel is finished". I think the real reason that 10nm is not coming out is that Intel wants to shut down AMD once and for all. I see this coming in two areas - in CPUs and also in GPUs - I believe the i7-870xG is a precursor to it, with the AMD GPU being replaced by Arctic Sound.

    But there is a good side to AMD in this: it keeps Intel's prices down and keeps Intel improving its products.
  • ishould - Monday, August 13, 2018 - link

    "I think the real reason that 10nm is not coming out, is that Intel wants to shut down AMD for once and for always." This is actually not true, Intel is having *major* yield issues with 10nm, hence 14nm being a 4-year-node (possibly 5 years if it slips from the expected Holiday 2019), and is a contributing factor for the decline of Intel/rise of AMD.
  • HStewart - Monday, August 13, 2018 - link

    I'm not stating that Intel didn't have yield issues - but there are 2 things that should be taken into account - and of course only Intel really knows:

    1. (Intel has stated this) Not all 10nm processes are equal - Intel's 10nm is closer to the competition's 7nm - and this is likely the reason why it's taking so long.

    2. Intel realizes the process issues - and if you think they are not aware of the competition in the market - not just AMD but also ARM - then one is a fool.
  • ishould - Monday, August 13, 2018 - link

    I agree they were probably being too ambitious with their scaling (2.4x) for 10nm. Rumor is that they've had to sacrifice some scaling to get better yields. EUV cannot come soon enough!
  • MonkeyPaw - Monday, August 13, 2018 - link

    I highly highly doubt that Intel would postpone 10nm just to “shut down AMD.” Intel has shareholders to look out for, and Intel needs 10nm out the door yesterday. Their 10nm struggles are real, and it is costing them investor confidence. No way would they wait around to win a pissing match with AMD while their stock value goes down.
  • HStewart - Monday, August 13, 2018 - link

    "I highly highly doubt that Intel would postpone 10nm just to “shut down AMD""

    Probably right - AMD is not that big of a threat in the real world. Just go into Best Buy: yes, they have some gaming machines, and a very few laptops, including older generations.
  • Spunjji - Tuesday, August 14, 2018 - link

    That is some impressive goalpost moving that you just did *on your own claim*.

    Intel's issues have nothing to do with AMD, but they will allow a resurgent AMD to become more competitive over time. Pointing to how little of a threat AMD are *right now* and/or making up weird conspiracy theories that place Intel as the only mover and shaker in the entire industry won't change that.
  • Relic74 - Wednesday, August 29, 2018 - link

    Consumer-based computers are but a small portion of the market. Servers - millions of them are needed every year to fill the demand from, well, everyone who hosts a site, governments, networking farms a mile long, etc. The server market is huge and is growing almost faster than tech companies can supply it. It's why I always thought Apple getting out of the server market was kind of a stupid idea; all of the servers they ever made were sold before they were even built. I guess the margins were too small for them, greedy bastards. Why only make double the profit when you can make 5x with consumer products? Seriously, an iPhone X costs less than $200 to make now - it used to be $250, but now it's $200 - greedy bastards. Oh, and did you know it costs Apple less than $3 to go from 64GB to 128GB? Ugh.
  • Ozymankos - Sunday, January 27, 2019 - link

    It matters what you consider as costs.
    Do you calculate the shipping costs, the marketing costs, the salaries of everyone involved, the building of new facilities?
  • Eastman - Tuesday, August 14, 2018 - link

    Intel isn't finished. They are still king of single-thread performance. We will see whether Zen 2 can surpass it.
  • seanlivingstone - Monday, August 13, 2018 - link

    Do you know that Jensen Huang is Lisa Su's uncle? Intel is done.
  • f1nalpr1m3 - Thursday, October 25, 2018 - link

    Expected vs. actual Q3 2018 results:

        Stat          Expected    Actual
        Revenue ($B)  $18.1       $19.2
        EPS           $1.15       $1.40
  • UnNameless - Tuesday, August 14, 2018 - link

    Sadly this is true. AMD tries hard and for the most part succeeds. Intel frankly showed some kind of panic over the niche market of top-end processors with that chiller-cooled fiasco of a 5 GHz CPU. This means AMD is putting quite some pressure on them.
  • Outlander_04 - Tuesday, August 14, 2018 - link

    AMD have bounced back very quickly, mostly because people are starting to accept how overpriced Intel have been.
    https://wccftech.com/intel-coffee-lake-amd-ryzen-c...
  • twtech - Wednesday, August 15, 2018 - link

    I don't think branding issues are going to stop purchases of AMD chips when they are the best fit for a particular use case, but the lack of direct memory access for half of the cores in the 2990WX is going to keep it from being the knockout punch for HEDT that it could have been.

    Looking at these benchmark results, that has seriously gimped the performance of the 32-core TR, to the point where it is slower than the 16-core in some threaded workloads.

    Sure, you can just go ahead and buy the 16-core 2950X instead, but then you're back in 7980XE territory - albeit at a cheaper price point. The point is, it's not the clear win that a relatively high-clocked 32-core CPU could have been without the memory access issue.
  • edzieba - Monday, August 13, 2018 - link

    Not really. In chasing Moar Cores you only excel in embarrassingly parallel workloads. And embarrassingly parallel workloads are in GPGPU's house. And GPU lives in GPGPU's house.
  • boeush - Monday, August 13, 2018 - link

    Try to run multiple VMs/Containers and/or multiple desktop sessions on a GPGPU: you might find out that GPGPU's house isn't all it's cracked up to be...
  • SonicKrunch - Monday, August 13, 2018 - link

    Look at that power consumption. I'm not suggesting AMD didn't create a really great CPU here, but they really need to work on their efficiency. It's always been their problem, and it's not seemingly going away. The market for these near-$2k chips is also not huge compared to the normal desktop space. Intel has plenty of time to answer here with their known efficiency.
  • The_Assimilator - Monday, August 13, 2018 - link

    Yeah... look at the number of cores, numpty.
  • somejerkwad - Monday, August 13, 2018 - link

    The same efficiency that has consumer-grade products operating on more electricity in per-core and per-clock comparisons? Overclocking power gets really silly on Intel's high end offerings too, if you care to look at the numbers people are getting with an i9 that has fewer cores.
  • eddman - Monday, August 13, 2018 - link

    Interesting, can you post a link, please? I've read a few reviews here and there and when comparing 2600x to 8700k (which is more or less fair), it seems in most cases 8700k consumes less energy, even though it has higher boost clocks.
  • CrazyElf - Monday, August 13, 2018 - link

    The 8700k is not the problem. It is Skylake X.

    https://www.tomshardware.com/reviews/-intel-skylak...

    Power consumption scales up quickly when you OC X299. Threadripper is not an 8700K competitor; it is an X299 competitor. The 32-core AMD is clearly priced to compete against the 7980XE, unless Intel cuts the price.
  • eddman - Tuesday, August 14, 2018 - link

    I should've made it clear. I was replying to the "more electricity in per-core and per-clock" part. Also, he wrote consumer-grade, which is not HEDT. I do know that TR competes with SKL-X.

    Comparing OCing power consumption is rather pointless when one chip is able to clock much higher.

    Even when comparing 2950 to 7980, there are a lot of instances where 7980 consumes about the same power or even less. I don't see how ryzen is more efficient.
  • alpha754293 - Monday, August 13, 2018 - link

    @ibnmadhi
    "It's over, Intel is finished."

    Hardly.

    For example, the Threadripper 2990WX (32C, 3.0 GHz) gets the highest score in the POV-Ray 3.7.1 benchmark, but when you compute the efficiency, it's actually the worst.

    It consumes more power and only gets about 114 points per (base clock * # of cores), which is a way to roughly estimate the CPU's total processing capability.

    By comparison, the Intel Core i9-7980XE (18C, 2.6 GHz) is actually the MOST EFFICIENT at 168 points per (base clock * # of cores). It consumes less power than the Threadripper processors, but it does also cost more.
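
    A quick sanity check of that back-of-the-envelope metric - a minimal Python sketch; the POV-Ray scores here are illustrative placeholders back-solved from the per-unit figures above, not the review's exact numbers:

        def points_per_ghz_core(score, base_ghz, cores):
            # rough metric: benchmark points per (base GHz * core count)
            return score / (base_ghz * cores)

        print(points_per_ghz_core(10900, 3.0, 32))  # 2990WX: ~114
        print(points_per_ghz_core(7900, 2.6, 18))   # 7980XE: ~168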

    If I can get a system that can do as much or more for less, both in terms of capital cost and running cost (i.e. total cost of ownership), then why would I want to go AMD?

    I used to run all AMD when it was the better value proposition and when Intel's power profile was much worse than AMD's. Now it has completely flipped around.

    Keep in mind also that they kept the Epyc 7601 processor in here for comparison - a processor that costs $4200.

    At that price, I know that I can get an Intel Xeon processor with about the same core count and base clock speed, and I also know that it will outperform the Epyc 7601 when you look at the data.

    As of August 2018, Intel has a commanding 79.4% market share compared to AMD's 20.6%. That's FARRR from Intel being over.
  • ender8282 - Monday, August 13, 2018 - link

    Base clock * number of cores seems like a poor stand-in for performance per watt. If we assume that IPC and other factors like mem/cache latency are the same, then sure, base clock * num cores effectively gives us performance per unit of power, but we know those are not constant.
  • plonk420 - Tuesday, August 14, 2018 - link

    worse for efficiency?

    https://techreport.com/r.x/2018_08_13_AMD_s_Ryzen_...
  • Railgun - Monday, August 13, 2018 - link

    How can you tell? The article isn’t even finished.
  • mapesdhs - Monday, August 13, 2018 - link

    People will argue a lot here about performance per watt and suchlike, but in the real world the cost of the software and the annual license renewal is often far more than the base hw cost, resulting in a long term TCO that dwarfs any differences in some CPU cost. I'm referring here to the kind of user that would find the 32c option relevant.

    Also missing from the article is the notion of being able to run multiple medium scale tasks on the same system, eg. 3 or 4 tasks each of which is using 8 to 10 cores. This is quite common practice. An article can only test so much though, at this level of hw the number of different parameters to consider can be very large.

    Most people on tech forums of this kind will default to tasks like 3D rendering and video conversion when thinking about compute loads that can use a lot of cores, but those are very different to QCD, FEA and dozens of other tasks in research and data crunching. Some will match the arch AMD is using, others won't; some could be tweaked to run better, others will be fine with 6 to 10 cores and just run 4 instances testing different things. It varies.

    Talking to an admin at COSMOS years ago, I was told that even coders with seemingly unlimited cores to play with found it quite hard to scale relevant code beyond about 512 cores, so instead, for the sort of work they were doing, the centre would run multiple simulations at the same time, which on the hw platform in question worked very nicely indeed (1856 cores of the SandyBridge-EP era, 14.5TB of globally shared memory, used primarily for research in cosmology, astrophysics and particle physics; squish it all into a laptop and I'm sure Sheldon would be happy. :D) That was back in 2012, but the same concepts apply today.

    For TR2, the tricky part is getting the OS to play nice, along with the BIOS, and optimised sw. It'll be interesting to see how 2990WX performance evolves over time as BIOS updates come out and AMD gets feedback on how best to exploit the design, new optimisations from sw vendors (activate TR2 mode!) and so on.

    SGI dealt with a lot of these same issues when evolving its Origin design 20 years ago. For some tasks it absolutely obliterated the competition (eg. weather modelling and QCD), while for others in an unoptimised state it was terrible (animation rendering, not something that needs shared memory, but ILM wrote custom sw to reuse bits of a frame already calculated for future frames, the data able to fly between CPUs very fast, increasing throughput by 80% and making the 32-CPU systems very competitive; in the long run though it was easier to brute-force on x86 and save the coder salary costs).

    There are so many different tasks in the professional space, the variety is vast. It's too easy to think cores are all that matter, but sometimes having oodles of RAM is more important, or massive I/O (defense imaging, medical and GIS are good examples).

    I'm just delighted to see this kind of tech finally filter down to the prosumer/consumer, but alas much of the nuance will be lost, and sadly some will undoubtedly buy based on the marketing, as opposed to the golden rule of any tech at this level: ignore the published benchmarks; the only test that actually matters is your specific intended task and data, so try to test with that before making a purchasing decision.

    Ian.
  • AbRASiON - Monday, August 13, 2018 - link

    Really? I can't tell if posts like these are facetious or kidding or what?

    I want AMD to compete so badly, long term, for all of us, but Intel has such immense resources and such huge infrastructure, and they have ties to so many big businesses for high-end server solutions. They have the bottom end of the low-power market sealed up.

    Even if their 10nm is delayed another 3 years, AMD will only just begin to make a genuine long-term dent in Intel.

    I'd love to see us at a 50/50 situation here, heck I'd be happy with a 25/75 situation. As it stands, Intel isn't finished, not even close.
  • imaheadcase - Monday, August 13, 2018 - link

    Are you looking at the same benchmarks as everyone else? I mean, AMD's ass was handed to it in encoding tests, and it even went neck and neck against some 6-core Intel products. If AMD got one of these out every 6 months with better improvements, sure - but they never do.
  • imaheadcase - Monday, August 13, 2018 - link

    Especially when you consider they are using double the core count to get the numbers they do have - it's not a very efficient way to get better performance.
  • crotach - Tuesday, August 14, 2018 - link

    It's happened before. AMD trashes Intel. Intel takes it on the chin. AMD leads for 1-2 years and celebrates. Then Intel releases a new platform and AMD plays catch-up for 10 years and tries hard not to go bankrupt.

    I dearly hope they've learned a lesson from last time, but I have my doubts. I will support them and my next machine will be AMD, which makes perfect sense, but I won't be investing heavily in the platform, so no X399 for me.
  • boozed - Tuesday, August 14, 2018 - link

    We're talking about CPUs that cost more than most complete PCs. Willy-waving aside, they are irrelevant to the market.
  • Ian Cutress - Monday, August 13, 2018 - link

    Hey everyone, sorry for leaving a few pages blank right now. Jet lag hit me hard over the weekend from Flash Memory Summit. Will be filling in the blanks and the analysis throughout today.

    But here's what there is to look forward to:

    - Our new test suite
    - Analysis of Overclocking Results at 4G
    - Direct Comparison to EPYC
    - Me being an idiot and leaving the plastic cover on my cooler, but it completed a set of benchmarks. I pick through the data to see if it was as bad as I expected

    The benchmark data should now be in Bench, under the CPU 2019 section, as our new suite will go into next year as well.

    Thoughts and commentary welcome!
  • Tamz_msc - Monday, August 13, 2018 - link

    Are the numbers for the LuxMark C++ test correct? Seems they've been swapped (2990WX and 2950X).
  • Ian Cutress - Monday, August 13, 2018 - link

    It looks like the 2950X numbers are reversed (C++ should be OpenCL), but I checked the raw data and that's what came out of the benchmark. I need to put the 2950X back on the testbed; I'll do it in a bit.
  • Stuka87 - Monday, August 13, 2018 - link

    Thanks for getting this up Ian! An awesome read per usual :)
  • deathBOB - Monday, August 13, 2018 - link

    The interconnect analysis was very interesting, glad you spent time on that.
  • mapesdhs - Monday, August 13, 2018 - link

    Yes, that was good. I had flashbacks to reading SGI Origin technical reports 20 years ago. :D

    http://www.sgidepot.co.uk/origin/isca.pdf
    http://www.sgidepot.co.uk/origin/hypercube.pdf

    Index: http://www.sgidepot.co.uk/origin/

    I see a great many similarities, though the emphasis is different (SGI was all about bandwidth rather than latency, for extreme I/O and huge datasets in shared memory, though they greatly improved the latency behaviour with the 2nd-gen design). Fascinating to see many of the same issues play out in the consumer space, but for rather different tasks, though I bet a lot of researchers in industry and academia will be taking keen interest in what AMD has released.
  • close - Monday, August 13, 2018 - link

    "They will enable four cores per complex (8+8+8+8) and three cores per complex (6+6+6+6)"

    3/4 cores per complex or 6/8 cores?
  • MrSpadge - Monday, August 13, 2018 - link

    The 8 cores per die are distributed over 2 CCX core complexes with 4 cores each, as in Ryzen 1.
  • FreckledTrout - Monday, August 13, 2018 - link

    LOL You actually ran tests with the plastic on? That is just funny. Did the plastic melt?
  • Ian Cutress - Monday, August 13, 2018 - link

    It ran fine, though the numbers suggest the thermals reduced PB2/XFR2 turbo by a fair bit. Some tests look a bit down. Still writing it up :)
  • FreckledTrout - Tuesday, August 14, 2018 - link

    Hilarious. That does sound like something I would do in a hurry. I see you have a whole section waiting for plastic vs. no-plastic thermals. I bet that will be an AnandTech-only talking point. :)
  • msroadkill612 - Thursday, August 16, 2018 - link

    A coredom?
  • just4U - Monday, August 13, 2018 - link

    Ian, were you testing this with the CM Wraith cooler? If not, is it something you plan to review?
  • Ian Cutress - Monday, August 13, 2018 - link

    Most of the testing data is with the Liqtech 240 liquid cooler, rated at 500W. I do have data taken with the Wraith Ripper, and I'll be putting some of that data out when this is wrapped up.
  • IGTrading - Monday, August 13, 2018 - link

    To be honest, with the top-of-the-line 32-core model, it is interesting to identify as many positive-effect cases as possible, to see if the set of applications that truly benefit from the added cores will persuade power users to purchase it.

    Like you've said, it is a niche of a niche, and seeing it be X% faster or Y% slower is not as interesting as seeing what it can actually do when it is used efficiently, and whether this makes a compelling argument for power users.
  • PixyMisa - Tuesday, August 14, 2018 - link

    Phoronix found that a few tests ran much faster on Linux - for 7zip compression in particular, 140% faster (as in, 2.4x). Some of these benchmarks could improve a lot with some tweaking to the Windows scheduler.
  • phoenix_rizzen - Wednesday, August 15, 2018 - link

    It'd be interesting to redo these tests on a monthly basis after Windows/BIOS updates are done, to see how performance changes over time as the Windows side of things is tweaked to support the new NUMA setup for TR2.

    At the very least, a follow-up benchmark run in 6 months would be nice.
  • Kevin G - Monday, August 13, 2018 - link

    Chiplets!

    The power consumption figures are interesting, but TR does have to manage one thing that the high-end desktop chips from Intel don't: off-die traffic. The amount of power needed to move data off-die is significantly higher than moving it around on-die. Even in that context, TR's energy consumption for just the fabric seems high. When only a few threads are loaded, they should only be on the dies with the memory controllers, leaving two dies idle; it doesn't appear that the fabric is powering down while those remote dies are also powering down. Any means of watching cores enter/exit sleep states in real time?

    It'd also be fun to see, with Windows Server, what happens when all the cores on a die are unplugged from the system. Considering that AMD puts the home agent on the memory controller of each die, even without cores or memory attached, chances are that the home agent is still alive consuming power. It'd be interesting to see on Skylake-SP as well whether the home agents on the grid eventually power themselves down when there is nothing directly connected to them. It'd be worth comparing to the power consumption when a core is disabled in BIOS/EFI.

    I also feel that this would be a good introduction for what is coming down the road with server chips, and may reach the high-end consumer products: chiplets. These would permit the removal of the off-die Infinity Fabric links for something that is effectively on-die throughout the cluster of dies. That alone will save AMD several watts. Chiplets would also greatly simplify Threadripper: only two memory controller chiplets would need to be in the package vs. the four we have now. That should save AMD lots of power. (And for those reading this comment, yes, Intel has chiplet plans as well.) The other thing AMD could do is address how their cache coherency protocols work; AMD has hinted at some caching changes for Zen 2, but lacks specificity.
  • gagegfg - Monday, August 13, 2018 - link

    There doesn't seem to be even one case where the 16 additional cores of the 2990WX show up compared to the 2950X.
  • Ian Cutress - Monday, August 13, 2018 - link

    https://www.anandtech.com/bench/product/2133?vs=21...
  • Chaitanya - Monday, August 13, 2018 - link

    Built for scientific workloads.
  • woozle341 - Monday, August 13, 2018 - link

    Do you think the lack of AVX-512 is an issue? I might build a workstation soon for data processing with R and Python for some Fortran models and post-processing. Skylake-X looks pretty good with its quad memory channels, despite its high price.
  • MrSpadge - Monday, August 13, 2018 - link

    I don't think AVX-512 is going to matter much anytime soon. However, the 8 memory channels of EPYC could matter a lot for HPC.
  • ElFenix - Monday, August 13, 2018 - link

    You guys need a 4K or maybe even 5K workload for transcoding - it's thread-limited at 1080p, so it becomes IPC- and turbo-limited. With x265 you can load up multiple 1080p HandBrake instances on these high-core-count processors and they won't break a sweat.
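
    Something like this keeps a many-core chip busy - a minimal Python sketch that runs independent HandBrakeCLI encodes side by side; the file names, job list and quality value are placeholders:

        import subprocess

        jobs = [("in1.mkv", "out1.mkv"),
                ("in2.mkv", "out2.mkv"),
                ("in3.mkv", "out3.mkv")]

        # launch all encodes at once; each HandBrakeCLI instance is its own process
        procs = [subprocess.Popen(["HandBrakeCLI", "-i", src, "-o", dst,
                                   "--encoder", "x265", "--quality", "20"])
                 for src, dst in jobs]
        for p in procs:
            p.wait()  # block until every encode has finished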
  • ElFenix - Monday, August 13, 2018 - link

    That should be: *1080p spawns a limited number of threads*.
  • T1beriu - Monday, August 13, 2018 - link

    >Europe is famed for its lack of air conditioning everywhere

    The UK is generally a lot cooler in summer than the rest of Europe. I wouldn't generalize the lack of AC to the rest of Europe; AC is pretty common in my country.
  • jospoortvliet - Saturday, August 18, 2018 - link

    Missing everywhere here in Germany... though after this insanely hot summer, I bet that will begin to change...
  • powerincarnate - Monday, August 13, 2018 - link

    I didn't see a lot of gaming benchmarks, which I guess I understand, since these are more workstation CPUs. It would have been good to see both, though, to get a better idea of the overall qualities of the CPU as a multipurpose chip.

    It seems from Tom's Hardware's benches that the 7980XE, especially when overclocked, is best overall, with the AMD 2990WX obviously winning on the pure multi-threaded workstation stuff, as long as it is not memory intensive.

    It seems, from both of these sites, that the 2950X is really the processor to get from the Threadripper lineup.

    It seems that when gaming is taken into account, the best of both worlds is the 7900X.

    And for gaming, when you factor in price as well, the 8700K, the 8086K and, slightly behind, the 2700X are the chips to get.

    Overall... I'm a little disappointed in this release; I was much more impressed with the 2700X. Since we didn't really get a true change in the manufacturing process or the design of the chip, the limitations of the 2990WX will probably be ironed out with Zen 2 (this is Zen+, after all).
  • bill.rookard - Monday, August 13, 2018 - link

    Looking at it myself, yeah - these really aren't gaming CPUs by any stretch of the imagination, so the lack of gaming benchmarks is perfectly understandable to me. As for the benchmark results? I'm thinking the 2950X is the sweet spot. Lower power, lower latency, more power for the cores vs. the interconnects, and a much higher clock speed make it IMHO the better choice, unless you have those fringe workloads which require a bunch-o-cores.
  • shendxx - Monday, August 13, 2018 - link

    This guy comes from Tom's, which said the 7900X is the best of both worlds, lol - when the graphs from Tom's clearly show that even in gaming the 2950X is equal to the 8700K on minimum FPS and only loses 3 to 10 FPS on average.
  • apoclypse - Monday, August 13, 2018 - link

    I don't know. Gaming performance is the last thing I care about with this chip, but it seems to be all most of the tech press cares about, especially tech tubers. These chips are not for gaming. If anything, they should be compared to Intel's Xeon line, as that seems to be where AMD is actually aiming them, since AMD doesn't have a dedicated workstation SKU like Intel. They are only marketed as HEDT chips because it gives AMD positive press; if anything, the ones who should be paying attention are the high-end OEM workstation builders. In that regard, Threadripper is more than compelling: it's higher clocked than Intel's Xeon chips, has more cores for less money, and still has all the pro-level features needed for workstation-level work.

    I think AMD should lean into that a bit more in their marketing, but that stuff isn't sexy, and it doesn't grab attention like marketing it towards rich and stupid "gamers" and the technorati who eat that stuff up.

    This is a workstation chip, period, and should be treated, tested and benchmarked as such, imo.
  • Icehawk - Monday, August 13, 2018 - link

    If only the tier-1 vendors would offer TR workstations... I really wanted to purchase a few for work to use as VM hosts, but my only real option currently is Xeon. The 32-core monster would likely make a great VM host for mid-weight usage.
  • Lolimaster - Monday, August 13, 2018 - link

    Then build one yourself.
  • tmnvnbl - Monday, August 13, 2018 - link

    How did you measure the power numbers for core/uncore? Did you validate these against, e.g., wall measurements? The interconnect power study is very interesting, but I would like to see some more methodology there.
  • seafellow - Monday, August 13, 2018 - link

    I second the ask... how was the measurement performed? How can we (the readers) have confidence in the numbers without an understanding of how they were generated?
  • GreenReaper - Wednesday, August 15, 2018 - link

    Modern CPUs measure this themselves; AMD has boasted of the number of points at which power usage is measured throughout its new CPUs. Check out 'turbostat' in the 'linux-cpupower' package - or grab a copy of HWiNFO, which will show it.
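
    On Linux you can also read the package energy counter directly - a minimal Python sketch via the powercap sysfs interface, assuming the kernel exposes a RAPL domain for your CPU at this path (standard on recent Intel parts; AMD coverage depends on kernel version) and that you have permission to read it:

        import time

        RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0 energy, in microjoules

        def read_uj():
            with open(RAPL) as f:
                return int(f.read())

        e0, t0 = read_uj(), time.time()
        time.sleep(1.0)
        e1, t1 = read_uj(), time.time()
        # the counter wraps periodically, so keep the sampling interval short
        print(f"avg package power: {(e1 - e0) / 1e6 / (t1 - t0):.1f} W")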
  • Darty Sinchez - Monday, August 13, 2018 - link

    This here article be awesome. I is so ready to buy. But, me no have enough money so I wait for it sale.
  • perfmad - Monday, August 13, 2018 - link

    So is the 2990WX bottlenecking in Handbrake because of the indirect memory access for some cores? It would be interesting to know if that bottleneck can be worked around by running multiple encodes simultaneously. The latest Vidcoder beta uses the Handbrake core and has recently added support for multiple simultaneous encodes. It would be really appreciated if you had time to look into that.

    Also, do you share the source file and presets you use for the Handbrake tests, so we can run them on our own hardware for comparison? My CPU isn't one you've tested.

    Thanks for the review thus far.
  • AlexDaum - Monday, August 13, 2018 - link

    I think the memory bandwidth problem cannot be easily fixed, as it isn't a problem of one process using too much memory; rather, a core on one of the dies without a memory controller needs to go over the Infinity Fabric to get data. When all of the cores are active and want to fetch data from memory, it causes contention on the IF bus, which reduces the available memory bandwidth a whole lot, and the cores just end up waiting on memory.
    This is just my speculation though, not based on facts other than the observed bottleneck.
  • Aephe - Monday, August 13, 2018 - link

    Those 2990WX Corona results! Can't wait to get a machine based on this baby! Holding out for the TR2 release was worth it, for me at least.
  • Ian Cutress - Monday, August 13, 2018 - link

    That benchmark result broke my graphing engine! I had to start reporting it in millions.
  • melgross - Monday, August 13, 2018 - link

    It’s interesting. This reminds me of Bulldozer, where they made a bad bet with floating point (among some other things), and that held them back for years. This looks almost too specialized for most uses.
  • T1beriu - Monday, August 13, 2018 - link

    > We confirmed this with AMD, but for the most part the scheduler will load up the cores that are directly attached to memory first, before using the other cores. [...]

    It seems that Tomshardware says the opposite:

    >AMD continues working with Microsoft to route threads to the die with direct-attached memory first, and then spill remaining threads over to the compute dies. Unfortunately, the scheduler currently treats all dies as equal, operating in Round Robin mode. [...] According to AMD, Microsoft has not committed to a timeline for updating its scheduler.
  • Ian Cutress - Monday, August 13, 2018 - link

    Yeah, Paul and I were discussing this. It is a round-robin mode, but it's weighted based on available resources, thermal performance, proximity of busy threads, etc.
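
    For anyone who wants to take that decision away from the scheduler in the meantime, a minimal Python sketch using the third-party psutil package - the core numbering below is an assumption for illustration; check which logical CPUs actually map to the memory-attached dies on your system first:

        import psutil  # pip install psutil

        # hypothetical mapping: logical CPUs belonging to the two dies with
        # direct-attached memory (first the physical cores, then SMT siblings)
        MEM_DIE_CPUS = list(range(0, 16)) + list(range(32, 48))

        p = psutil.Process()          # current process, or psutil.Process(pid)
        p.cpu_affinity(MEM_DIE_CPUS)  # restrict scheduling to those logical CPUs
        print(p.cpu_affinity())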
  • JoeyJoJo123 - Monday, August 13, 2018 - link

    Maybe just user error, but all the article pages from Test Setup and Comparison Results through Going Up Against EPYC just have the text "Still writing...". I'm unsure whether the article is actually still being written and was intended to be published in this partial state, or if something was lost between writing and upload.

    In any case, it's kind of crazy how much power the Infinity Fabric is consuming. The cores look super-efficient, but if the uncore can get efficiency improvements, that can help the Zen architecture stay even more efficient under load. Intel's uncore consumes a fraction of the wattage, but doesn't scale as well for multiple threads.
  • Ian Cutress - Monday, August 13, 2018 - link

    Still being written. See my comment at the top. Unfortunately, travel back and forth from the UK to SF bit me over the weekend and I lost a couple of days of testing, along with having to take a full benchmark setup with me to SF to test in the hotel room.
  • JoeyJoJo123 - Monday, August 13, 2018 - link

    I understand, take your rest. You don't need to reply to me, I actually saw the reason after I posted.
  • compilerdev2 - Monday, August 13, 2018 - link

    Hi Ian,
    I have some questions about the Chromium compilation benchmark, since I was hoping to get the 2990WX for compiling large C++ apps. What version of Chromium is used? Is the compiler being used Clang-CL or Visual C++? Is the build in debug or release (optimized) mode? If it's release mode with Visual C++, does it use LTCG? (link-time code generation, the equivalent of LTO of gcc/clang). For example, if the build is Visual C++ LTCG, the entire code optimization, code generation and linking is by default limited to 4 threads. Thanks!
  • Ian Cutress - Monday, August 13, 2018 - link

    It's the standard Windows walkthrough available online. So we use a build of Chrome 62 (it was relevant when we pulled it), VC++, built in release mode. It's done on the command line via ninja, and yes, it does use LTCG.

    Instructions are here. They might have been updated a little from when I wrote the benchmark. Our test is automated to keep consistency.

    https://chromium.googlesource.com/chromium/src/+/m...
  • compilerdev2 - Monday, August 13, 2018 - link

    With LTCG those strange results make sense - it's spending a lot of time on just 4 threads. Actually, the majority of the time is on one thread in the Chromium case; it hits some current limitations of the VC++ compiler regarding CPU/memory usage that make scaling worse for Chromium (but not for smaller programs or for non-LTCG builds). Increasing the number of threads from the default of 4 is possible, but will not help here. The frontend (parsing) work is well parallelized by ninja, which is probably why the Threadrippers do end up ahead of the faster single-core Intel CPUs. It would be interesting to see the benchmarks without LTCG, or even better, more compilation benchmarks, since these CPUs are really great for C/C++/Rust programmers.
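
    For reference, a minimal sketch of the flags involved, wrapped in Python for a self-contained example (main.cpp is a placeholder; run from a VS developer prompt): /GL defers whole-program optimization to link time, /LTCG performs it, and /CGTHREADS raises the default of 4 code-generation threads:

        import subprocess

        # compile with whole-program optimization deferred to link time
        subprocess.run(["cl", "/c", "/GL", "/O2", "main.cpp"], check=True)
        # link-time code generation with 8 codegen threads instead of the default 4
        subprocess.run(["link", "/LTCG", "/CGTHREADS:8", "main.obj",
                        "/OUT:main.exe"], check=True)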
  • Nexus-7 - Monday, August 13, 2018 - link

    Cool write-up on the uncore power usage! I especially enjoyed that part of the article.
  • johnny_boy - Monday, August 13, 2018 - link

    The Phoronix articles are more telling for the sort of workloads a 64-thread CPU would be used for.
  • sjoukew - Thursday, August 16, 2018 - link

    There is also an article which shows the difference in performance between Windows and Linux for the Threadripper processors. It is amazing to see. https://www.phoronix.com/scan.php?page=article&...
  • ishould - Monday, August 13, 2018 - link

    TL;DR: Unless you have a very specific need for 32 cores/the 2990WX, the 2950X is faster and cheaper, even in tasks that traditionally scale well with cores. This is looking to be a good differentiator between Zen+ EPYC and the 2990WX. Definitely looking forward to Zen+ EPYC tests!
  • Silma - Monday, August 13, 2018 - link

    "... it makes perfect sense for a narrow set of workloads where it toasts the competition."

    Can you please list those workloads?

    When would it make sense to purchase a Threadripper PC vs many 'normal processors' PCs for tasks that you can easily handle in parallel?
  • Aephe - Monday, August 13, 2018 - link

    For example, people like me who use their computers for 3D rendering, for one (again: those Corona scores!). The more cores (and GHz) + RAM, the better!
  • jospoortvliet - Saturday, August 18, 2018 - link

    And Linux/Unix use, like C++ or Rust developers. Or heavy VM users. https://www.phoronix.com/scan.php?page=article&...
  • beggerking@yahoo.com - Monday, August 13, 2018 - link

    I'm so glad the competition is back! It's been stagnant for years now (at about 4 cores) with Intel dominating the market. For home labs/servers, this is great news! More cores for less, without having to go the Xeon ES route. :)
  • j1ceasar@yahoo.com - Monday, August 13, 2018 - link

    Can someone tell me who needs these? I don't, as a normal consumer.
  • cheshirster - Monday, August 13, 2018 - link

    "tell me who needs these"
    As a normal customer you don't need to know this.
  • Gothmoth - Monday, August 13, 2018 - link

    Sorry, but these Handbrake numbers look wrong. No other website I have visited has seen such big differences.
  • twtech - Monday, August 13, 2018 - link

    I really would like to see some code-compile benchmarks. Next to rendering and other graphics-content-related tasks, compiling code in large codebases is probably the 2nd most common use case someone would consider buying a processor like this for, and it seems like there is a dearth of coverage of it.
  • jospoortvliet - Saturday, August 18, 2018 - link

    https://www.phoronix.com/scan.php?page=article&... has some.
  • nul0b - Monday, August 13, 2018 - link

    Ian, please define how exactly you're calculating and deriving the uncore and IF power utilization.
  • alpha754293 - Monday, August 13, 2018 - link

    I vote that from now on, all of the CPU reviews should be like this.

    Just raw data.
  • Lolimaster - Monday, August 13, 2018 - link

    To summarize:

    Intel's TDP is a blatant lie - it barely stays at TDP at 6c/6t - while AMD sticks on point or below TDP with their chips, boost included :D
  • Lolimaster - Monday, August 13, 2018 - link

    Selling more shares - from $1.65 then to $19 now :D

    AMD Threadripper 2, ripping the blue hole.
  • Lolimaster - Monday, August 13, 2018 - link

    It seems Geekbench can't scale beyond 16 cores.
  • Lolimaster - Monday, August 13, 2018 - link

    WHERE IS CINEBENCH?
  • Lolimaster - Monday, August 13, 2018 - link

    And I mean CB15.

    Also, for some reason CB11.5 MT seems to be broken for TR; it stops scaling at 12 cores.
  • mapesdhs - Monday, August 13, 2018 - link

    CB R15 is suffering issues as well these days; at this level it can exhibit huge variance from one run to the next.
  • eastcoast_pete - Monday, August 13, 2018 - link

    Thanks Ian, great article, look forward to seeing the full final version!

    My conclusions: All these are workstation processors, NOT for gaming; the Ryzen 2700X and the upcoming Intel octa-core 9000 series are/will be better for gaming, both in value for money and absolute performance. That being said, the TR 2950X could be a great choice if your productivity software can make good use of the 16 cores/32 threads, and if that same software isn't written to make strong use of AVX-512. If the applications you buy these monsters for can make heavy use of AVX-512, Intel's chips are currently hard or impossible to beat, even at the same price point. That being said, a 2950X workstation with 128 or 256 GB of RAM (in quad channel, of course), plus some fast PCIe/NVMe SSDs and a big & fast graphics card, would make an awesome video editing setup; and the 60 PCIe lanes would come in really handy for add-in boards. One fly in the ointment: AMD, since you're hamstringing TR with only quad-channel, at least let us use faster DDR4; how about officially supporting > 3.2 GHz?

    Unrelated: Love the testing setup where the system storage SSD (1 TB) is the same size as the working memory (1 TB). With one of these, you know you're in the heavyweight division.

    @Ian: I also really appreciate the testing of power draw by the cores vs. the interconnecting fabric. I also believe (as you wrote) that this is a much underappreciated hurdle in simply escalating the number of cores. I also wonder: a. How is that affecting ARM-based multicore chips, especially once we are talking 32 cores and up, as in the chips intended for servers? And b. Is that one of the reasons (or THE reason) why ARM-based manycore solutions have not been getting much traction, and why companies like Qualcomm have stopped their development? Yes, the cores might be very efficient, but if those power savings are being gobbled up by the interconnects, fewer but broader and deeper cores might still end up winning the performance/Wh race.
    If you and/or Ryan (or any of your colleagues) could do a deep dive into the general issue of power use by the interconnecting fabric in the different architectures, I would really appreciate it.
  • Lolimaster - Monday, August 13, 2018 - link

    I don't really see the point of OCing the 2990WX; it seems quite efficient at stock settings, averaging 170W fully loaded. Why go all the way to 400W+ for just 30% extra performance? It already destroys the 2950X/7980XE OCed to hell, beyond repair.
  • Lolimaster - Monday, August 13, 2018 - link

    Threadripper 2990WX = Raid Boss
  • yeeeeman - Monday, August 13, 2018 - link

    Amazing performance on AMD's part. If you want to see a real review of the 2990WX from a reviewer who understands how this CPU will be used, please check https://www.phoronix.com/scan.php?page=article&...
  • mapesdhs - Monday, August 13, 2018 - link

    Figured it would be those guys. 8) I talked to them way back when they started using C-ray for testing, after the original benchmark author handed it over to me for general public usage, though it's kinda spread all over the place since then. Yes, they did a good writeup. It's amusing when elsewhere one will see someone say something like, these CPUs are not best for gaming! Well, oh my, what a surprise, I could never have guessed. :D

    In the future though, who knows. Fancy a full D-day simulator with thousands of players? 10 to 20 years from now, CPUs like this might be the norm.
  • eva02langley - Tuesday, August 14, 2018 - link

    It is exactly what I said. If we don't have a proper test bed for a unique product like this, then the results we provide are not going to be representative of its true potential.

    Sites will need to update their benchmarks suites, or propose new review systems.
  • Gideon - Monday, August 13, 2018 - link

    Great article overall. The Fabric Power part was the most interesting one! Though you might want to check The Stilt's comments regarding that:

    https://forums.anandtech.com/threads/2990wx-review...
    and:
    https://forums.anandtech.com/threads/2990wx-review...
  • Icehawk - Monday, August 13, 2018 - link

    Ian, for in-progress articles, can they please be labelled that way? I would rather wait for the article to be complete than read just a few pages and have to check back hoping it has been updated.
  • mapesdhs - Monday, August 13, 2018 - link

    Ian, can you add C-ray to the multithreaded testing mix, please? It's becoming quite a popular test these days, as it can scale to hundreds of threads. Just run at 8K res using the sphfract scene file with a deep recursion depth (at least 8) to give a test that's complicated enough to last a decent amount of time and push out to main RAM a fair bit as well.
  • abufrejoval - Monday, August 13, 2018 - link

    OK, I understand, we are all enjoying this payback moment: Intel getting it on the nose for trying to starve AMD and Nvidia by putting chipsets and GPUs into the surplus transistors from process shrinks, transistors that couldn't do anything meaningful for Excel (thing is: spreadsheets would actually be ideal for multi-core, even GPGPU - you'd just need to rewrite them completely...)

    But actually, this article does its best to prepare y'all for the worst: twice the cores won't be twice the value, not this time around, nor the next... or the one after that.

    Please take a moment and consider the stark future ahead of us: from now on, PCs will be worse than middle-class smartphones with ten cores, where it's cheaper to cut & paste more cores than to think of something useful.
  • KAlmquist - Monday, August 13, 2018 - link

    I'm not sure AMD would have bothered with the 2990WX if it weren't for the Intel Core i9-7980XE. With 18 cores, the 7980XE beats the 16-core Threadripper 2950X pretty much across the board. On the other hand, if you're running software that scales well across lots of cores - and you probably are if you're considering shelling out the money for a 7980XE - the 32-core 2990WX will be faster, for about $100 less.

    These are niche processors; I doubt either of them will sell in enough volume to make a significant difference to the bottom line at Intel or AMD. My guess is that both the 2990WX and the 7980XE were released more for the bragging rights than for the sales revenue they will produce.
  • eva02langley - Tuesday, August 14, 2018 - link

    You don't get it - it is a proof of concept and a disruptive tactic to get noticed, so that people consider AMD in the future... and it works perfectly.
  • KAlmquist - Thursday, August 16, 2018 - link

    That's what I meant by “bragging rights.”
  • eva02langley - Thursday, August 16, 2018 - link

    You are missing the business standpoint, the stakeholders and the proof of concept.

    Nvidia is surfing on AI; however, the only thing they've done so far is sell GPUs during a mining craze. Yet people drink their Kool-Aid and the investors are all over them. The hangover is going to be hard.
  • Lolimaster - Monday, August 13, 2018 - link

    If you're a content creator, the Threadripper 2950X is your bitch, period.
  • MrSpadge - Tuesday, August 14, 2018 - link

    Ian, does the power consumption of the uncore (IF + memory controller) scale with the IF/memory controller frequency? I would expect so. And if not, maybe AMD is missing out on huge possible power savings at lower frequencies. Not sure if overall efficiency could benefit from that, though, as performance and power would simultaneously regress.
  • dynamis31 - Tuesday, August 14, 2018 - link

    It's not all silicon!
    The Windows OS and the applications running on it may also be further software-optimized for 2990WX workloads, as you can see below:
    https://www.phoronix.com/forums/forum/phoronix/lat...
  • dmayo - Tuesday, August 14, 2018 - link

    Meanwhile, on Linux the 2990WX destroyed the competition.

    https://www.phoronix.com/scan.php?page=article&...
    https://www.phoronix.com/scan.php?page=article&...
  • eva02langley - Tuesday, August 14, 2018 - link

    I am beginning to ask myself if this is related to Windows. Or maybe to the bench suites' reliability with such a unique product.

    But yeah, these results are insane.
  • MrSpadge - Tuesday, August 14, 2018 - link

    Crazy results, indeed. And quite believable, considering how well the 16-core TR fares in comparison in many Windows benches. I suspect the scheduler is not yet tuned for the new architecture with 2 different NUMA levels.

    And for at least part of the benchmarks, I suspect something a lot less technical is happening: Phoronix can only bench cross-platform software for this comparison. However, hardly any Windows programmer regularly builds Linux versions. That leaves just the other option: Linux programs which also get a Windows build. And considering how downright hostile Linux fans can be towards Windows and anything Microsoft-related, I wouldn't be surprised if the tuning that went into those builds was far from ideal. Some of these guys really enjoy shouting out loud that they don't have access to any Windows machine to test their build (which they only did to stop the requests flooding their inbox) and shoving down their users' throats that Windows is a second-class citizen in their world. This point is reinforced by the weird names of many of the benchmarks - except for 7-zip, is anyone actually using those programs?
  • GreenReaper - Wednesday, August 15, 2018 - link

    Most aren't dedicated benchmarks, they're useful programs being run as such:
    * x264 powers most CPU-based H.264/AVC video encoding. Steam uses it, for example.
    * GraphicsMagick is a fork of ImageMagick, one of which is used in a large number of websites (probably including this one) for processing images.
    * FFmpeg is for audio and video processing.
    * Blender is a popular open-source rendering tool.
    * Minion is for constraint-solving (e.g. the four-colour map problem).

    Many aren't the kind of things you'd run on a regular desktop - but a workstation, sure. They are CPU-intensive parallel tasks which scale - or you hope will scale - with threads.
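
    Turning one of them into a thread-scaling benchmark takes only a few lines. A sketch, assuming ffmpeg with libx264 is on the PATH and a sufficiently long input.mp4 (hypothetical name) sits in the working directory:

    import subprocess, time

    SRC = "input.mp4"  # hypothetical test clip; any long video works

    for threads in (4, 8, 16, 32, 64):
        t0 = time.time()
        # "-f null -" decodes and encodes but discards the output
        subprocess.run(
            ["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264", "-preset", "medium",
             "-threads", str(threads), "-f", "null", "-"],
            check=True, capture_output=True)
        print(f"{threads:>2} threads: {time.time() - t0:.1f}s")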
  • NevynPA - Tuesday, August 14, 2018 - link

    Will there be results for WX chips in 'Game Mode' at various core/thread counts (6/12,8/16,12/24)?
  • jospoortvliet - Saturday, August 18, 2018 - link

    It has no game mode. Don't bother buying it for games...
  • jts888 - Tuesday, August 14, 2018 - link

    What is the methodology used for the core/uncore power breakdown? Where was a physical measurement or software reading taken, and what were the loads used?

    Furthermore, Zen uses single-ended signaling for IF links, with allegedly even further reduced power draw in transient no-send states, so there should be at least two clearly explained tests (i.e., both high and low inter-thread/core/socket bandwidth, with NUMA allocations detailed) before interconnect power breakdowns can credibly be presented as flat metrics of the architectures investigated.

    Although this review is still a work in progress, it needs some substantial improvements in clarity given the strength of the claims made and conclusions drawn.
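
    To illustrate what a reproducible software-side reading could look like - a minimal sketch, assuming a Linux hwmon driver that exposes energy*_input counters in microjoules for the package (sensor names and availability vary by platform and kernel):

    import glob, time

    # Monotonic energy counters, per the hwmon sysfs ABI (microjoules)
    counters = sorted(glob.glob("/sys/class/hwmon/hwmon*/energy*_input"))

    def read_all():
        return [int(open(c).read()) for c in counters]

    INTERVAL = 5.0
    before = read_all()
    time.sleep(INTERVAL)   # run the workload under test here instead of sleeping
    after = read_all()
    for path, e0, e1 in zip(counters, before, after):
        print(f"{path}: {(e1 - e0) / INTERVAL / 1e6:.1f} W")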
  • ktmrc8 - Thursday, August 16, 2018 - link

    Let me add my voice to those asking for further elaboration on this point. I think it's very interesting, but I would like enough detail to be able to replicate your data. In particular, I find the charts showing power consumption decreasing as the number of loaded threads increases counter-intuitive (at least for me!). Thanks.
  • Sahrin - Tuesday, August 14, 2018 - link

    The link power is a problem, but I get the feeling that nowhere near as much power optimization went into the IF as went into the cores.
  • notfeelingit - Tuesday, August 14, 2018 - link

    What's up with the 2950X's crazy-low score in the PCMark10 Startup test? Is that repeatable?
  • crotach - Tuesday, August 14, 2018 - link

    So, 2700X looks like a clear winner here?
  • GreenReaper - Wednesday, August 15, 2018 - link

    For the average consumer, yes. It's a sweet spot. Heck, most would do fine with an APU. You don't expect a truck to win a race. Small engines tend to be more efficient; they're just limited in raw power.
  • witeko - Tuesday, August 14, 2018 - link

    Hi, can we have some tests covering data processing (Spark, Dask), machine learning (LightGBM/XGBoost training) and deep learning (I know there are GPUs), just to get a feeling? There are pre-made benchmarks for TensorFlow. Also, some reviews point to Windows 10 vs Linux differences, for example in the zip test.
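
    Even something this small would be telling - a sketch, assuming xgboost and scikit-learn are installed (dataset size and thread counts are arbitrary):

    import time
    from sklearn.datasets import make_classification
    from xgboost import XGBClassifier

    # Synthetic dataset, large enough to keep many cores busy
    X, y = make_classification(n_samples=200_000, n_features=50, random_state=0)

    for threads in (1, 4, 16, 32, 64):
        clf = XGBClassifier(n_estimators=100, n_jobs=threads, tree_method="hist")
        t0 = time.time()
        clf.fit(X, y)
        print(f"{threads:>3} threads: {time.time() - t0:.1f}s")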
  • farmergann - Tuesday, August 14, 2018 - link

    Really should have included the Epyc 7401p as it's a serious contender in this price range (only $1,000).
  • 3DVagabond - Wednesday, August 15, 2018 - link

    When did you switch to this new benchmark suite?
  • Lord of the Bored - Wednesday, August 15, 2018 - link

    Still writing...
  • mukiex - Friday, August 17, 2018 - link

    Looks like it's no longer a problem! They deleted all those pages.
  • GreenReaper - Saturday, August 18, 2018 - link

    They're back again now.
  • abufrejoval - Wednesday, August 15, 2018 - link

    Separating CPU (and GPU) cores from their memory clearly doesn't seem sustainable going forward.

    That's why I find the custom chip AMD did for the Chinese console so interesting: if they did an HBM variant, perhaps another with 16 or even 32GB per SoC, they'd use the IF mostly for IPC/non-local memory access, and the chance of using GPGPU compute for truly parallel algorithms would be much bigger, as the latency of context switches between CPU and GPU code would be minimal with both using the same physical memory space.

    They might still put ordinary RAM or NV-RAM somewhere to the side as secondary storage, so it looks a little like Knights Landing.

    IF interconnects might be a little longer - really long once you scale beyond what fits on a single board - and probably something where optical interconnects would be better (once you have them...).

    I keep having visions of plenty of such 4x boards immersed in a tank of this "mineral oil" stuff that evidently has little to do with oil, but allows so much more density and could flow around those chips 'naked'.
  • Alaa - Wednesday, August 15, 2018 - link

    I do not think that testing only a single tool at a time is a good benchmark for such a high-core-count architecture. These cores need concurrent workloads to showcase their real power.
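
    Even a crude concurrent load would be more representative - for example, this pure-Python sketch (worker count and job size are arbitrary) compares running the same CPU-bound jobs together versus back to back:

    import time
    from multiprocessing import Pool

    def burn(n):
        # Deliberately dumb CPU-bound work
        s = 0
        for i in range(n):
            s += i * i
        return s

    if __name__ == "__main__":
        jobs = [20_000_000] * 32

        t0 = time.time()
        with Pool(32) as p:
            p.map(burn, jobs)
        print(f"32 jobs concurrently: {time.time() - t0:.1f}s")

        t0 = time.time()
        for j in jobs:
            burn(j)
        print(f"32 jobs serially:     {time.time() - t0:.1f}s")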
  • csell - Thursday, August 16, 2018 - link

    Can somebody please tell me the difference between the ASUS ROG Zenith Extreme motherboard rev. 2 used here and the old ASUS ROG Zenith Extreme motherboard? I can't find any information about the rev. 2 anywhere else.
  • UnNameless - Friday, August 17, 2018 - link

    I also want to know that. I have the "rev 1" ASUS ROG Zenith Extreme and can't find any difference.
  • spikespiegal - Friday, August 17, 2018 - link

    Companies buy PCs to run applications and don't care about memory timings, CPUs, clock speeds or any other motherboard architecture. They only care about the box on the desk running applications, and about ROI, as they should. AMD has historically only made a dent in the low-end desktop market because Intel has this funny habit of not letting chip prices depreciate much below $200. AMD does, so it occupies the discount desktop market: when you buy 10,000 general-purpose workstations, saving $120 per box is a big chunk of change.
    I'm looking at the benchmark tests and all I'm seeing is the AMD chips doing well in mindless rendering and other synthetic desktop tasks that no one outside multimedia would care about. The i7 holds its own in too many complex application tests, which suggests that, once again, per-core efficiency is all that matters, and AMD can't alter that reality. Where is the VMware host / mixed-guest application benchmark consisting of Exchange, SQL, RDS, file services, AD and the like? You know, the things that run corporate commerce and favor high per-core efficiency? Nobody runs bare-metal servers anymore, and nobody reputable builds their own servers.
  • Dragonrider - Friday, August 17, 2018 - link

    Ian, are you going to test PBO performance with these processors? (I know, it was probably not practical while you were on the road.) Some questions popped up in my mind. Can PBO be activated when the processor is in a partial mode (i.e. 1/2 mode, or game mode in the case of the 2990WX)? Also, what do power consumption and performance look like in those partial modes for different application sets, with and without PBO? I know that represents a lot of testing, but on the surface the 2990WX looks like it could be a really nice all-round processor if one were willing to do some mode switching. It seems like it should perform pretty close to the 2950X in game mode and 1/2 mode, and you have already established that it is a rendering beast in full mode. Bottom line: I think the testing published so far only scratches the surface of what this processor may be capable of.
  • MattZN - Monday, August 20, 2018 - link

    If it's idling at 80-85W, that implies you are running the memory fabric at 2800 or 3000MHz or higher. Try running the fabric at 2666MHz.

    Also keep in mind that a 2990WX running all 64 threads on a memory-heavy workload is almost guaranteed to be capped by the available memory bandwidth, so there's no point overclocking the CPU for those sorts of tests. In fact, you could try setting a lower PPT limit for the CPU cores along with running the memory at 2666; you can probably chop 50-100W off the power consumption without changing the test results much (beyond the difference between 3000 and 2666).

    It's a bit unclear what you are loading the threads with. A computation-intensive workload will not load the fabric much, meaning power shifts to the CPU cores and away from the fabric. A memory-intensive workload, on the other hand, will stall out the CPU cores (due to hitting the memory bandwidth cap that four memory channels give you), yet run the fabric at full speed. This is probably why you are seeing the results you are seeing. The CPU cores are likely hitting so many stalls that they might as well be running at 2.8GHz instead of 3.4GHz, so they won't use nearly as much power as you might expect.
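
    The two regimes are easy to see even from a single process - a numpy sketch, with array sizes chosen arbitrarily to miss and hit the caches respectively:

    import time
    import numpy as np

    N = 100_000_000                # ~0.8 GB per float64 array: far beyond any cache
    b = np.ones(N); c = np.ones(N)
    t0 = time.time()
    a = b + 3.0 * c                # STREAM-style triad: three large arrays touched
    dt = time.time() - t0
    print(f"memory-bound triad: {3 * N * 8 / dt / 1e9:.1f} GB/s effective")

    M = 100_000                    # ~0.8 MB: cache-resident, almost no DRAM traffic
    x = np.ones(M)
    t0 = time.time()
    for _ in range(5_000):
        x = np.sin(x)              # compute-heavy kernel that stays in cache
    print(f"compute-bound kernel: {time.time() - t0:.1f}s")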

    -Matt
  • XEDX - Monday, August 20, 2018 - link

    What happened to the Chromium compile rate for the 7980XE? In its own review, posted on Sep 25th 2017, it achieved 36.35 compiles per day, but in this review it dropped all the way down to 21.1.
  • jcc5169 - Saturday, August 25, 2018 - link

    "Intel Will Struggle For Years And AMD Will Reap The Benefits" - SegmentNext: https://segmentnext.com/
  • Relic74 - Wednesday, August 29, 2018 - link

    Regardless of the outcome, I went ahead and bought the 32-core version. As I run SmartOS, an OS designed to run and manage virtual machines, I decided to go this route over the 24-core EPYC. My setup includes the new MSI MEG X399, the 32-core TR, 128GB of DDR4 RAM, 3x Vega Frontier (used, $1,000 for all three; no one wants them, but I love them) and 1x Nvidia Titan Z (used, for only $700 - an amazing find from a pawn shop that did not know what it had and listed it as an XP). Storage is 2x 1TB Samsung 970 Pro in RAID 0 and 5x 8TB SATA in RAID 5 with 8GB of cache on the card.

    The system is amazing and cost me much, much less than the iMac Pro I was about to buy. Now I can run any OS in a VM, including OSX, with a designated GPU per VM and cores allocated to each. This setup is amazing, SmartOS is amazing. I have stopped running OSes with every application installed; instead I create single-purpose VMs and install just one or maybe two applications in each. So for instance, when I'm playing a game like DCS, a fantastic flight simulator, the VM only has DCS and Steam installed, allowing for the best performance possible. No, the loss of performance from running things in a VM is so minuscule that it's a non-issue. DCS with the Titan V runs at over 200 FPS at 4K with everything turned up to maximum; I actually have to cap games to my gaming monitor's 144Hz refresh rate. Not only that, but I can be playing the most demanding game there is, even in VR, while encoding a media file, rendering something in Blender and compiling an application, all tasks running under their own VMs like an orchestra of perfection.

    Seriously, I will never go back to a one-OS-at-a-time machine again, not when SmartOS exists, and especially not when 32 cores are available at your command. In fact, anyone who buys this CPU and just runs one single OS at a time is an idiot, as you will never, ever harness its full potential; no single application really can at the moment, or at least not to the point where it's worth it.

    Most games don't need more than 4 cores, most design applications can't even use more than 2 cores, and rendering applications use more of the GPU than the CPU. In fact, the only thing that really taxes my CPU is SmartOS itself, which is controlling everything, but even that doesn't need more than 6 cores to function perfectly; heck, I even gave it 12 cores, but it didn't utilize them. So I have cores coming out of the yin-yang and more GPUs than I know what to do with. Aaaaahhhh, poor, poor me.

    This computer will be with me for at least 10 years without my ever feeling the need for an upgrade, which is why I spent the money. Get it right the first time and then leave it alone, I say.

    Oh, and the memory management in SmartOS is incredible. I have set it up so that if a VM needs more RAM, it just grabs it from another that isn't using it at the moment; it's all dynamic. Man, I am in love.

    Anyway.....
  • Phaedra - Sunday, March 3, 2019 - link

    Hi Relic74,

    I enjoyed reading your lengthy post on the technical marvel that is SmartOS and the 32 Core TR.

    I am very much interested in the technical details of how you got SmartOS to work with AMD hardware. Which version of SmartOS, Windows, KVM (or BHYVE) with PCI passthrough etc?

    I am in the process of preparing my own threadripper hyper computer and would love some advice regarding the KVM + PCI passthrough process.

    You mention gaming in a VM so I assume that you used a Windows 10 guest via KVM with PCI passthrough?

    The following says SmartOS doesn't support KVM on AMD hardware: https://wiki.smartos.org/display/DOC/SmartOS+Techn...

    Did you build the special module with amd-kvm support:
    https://github.com/jclulow/illumos-kvm/tree/pre-ep...
    or
    https://github.com/arekinath/smartos-live

    I would appreciate any insight or links to documentation you could provide. I am familiar with Windows/Linux/BSD, so you can let me have the nitty-gritty details. Thanks!
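
    For anyone else going down this path: on a Linux host (illumos will differ), the first thing I check before any passthrough attempt is the IOMMU grouping, since every device in a group must be handed to the guest together - a minimal sketch:

    import os

    BASE = "/sys/kernel/iommu_groups"   # empty unless the IOMMU is enabled at boot

    for group in sorted(os.listdir(BASE), key=int):
        devices = os.listdir(os.path.join(BASE, group, "devices"))
        print(f"group {group}: {', '.join(devices)}")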
  • gbolcer - Wednesday, September 19, 2018 - link

    Curious why virtualization was disabled?
  • Ozymankos - Sunday, January 27, 2019 - link

    Your tests are typical for a single-core machine, which is laughable.
    Please try to download a game with Steam, play some music, watch TV on a TV tuner card, play a game on 4, 6 or 8 monitors, and do some work like computing something in the background (not virus scanners - something intelligent, like searching for life on other planets).
    Then you shall see the truth.
  • intel352 - Thursday, July 18, 2019 - link

    Old article obviously, but wth, numerous benchmark graphs exclude the 2950X from the results. Pretty bad quality control.
