Note: the following is a modified and shortened version of the transcript Jim used for his video. This transcript has been revised for an article format and is not a 1:1 copy. The original video can be watched here.
Alright guys, how’s it going?
Just under 3 years ago, near the beginning of 2017, the fastest mainstream desktop CPU – the “Kaby Lake” i7-7700K – arrived with 4 cores and 8 threads. Four cores and eight threads had been the mainstream ceiling since 2008, when Intel launched their Nehalem architecture with the Core “i7” branded 900-series CPUs, all of which had 4 cores and 8 threads. Due to a complete lack of serious AMD competition, in subsequent years Intel actually took the opportunity to scale performance down within the segment, creating the “i3” branded CPUs – dual cores with hyper-threading but no turbo – and the “i5” branded CPUs – quad cores with turbo but no hyper-threading.
For 9 long years, if you wanted more than 4 cores you had to move up to the “High End Desktop” platform with expensive motherboards and chips like the $999 i7-980X.
A couple of months after the Kaby Lake i7-7700K launched (still in early 2017), the first generation Ryzen series launched and changed the status quo. The fastest mainstream CPU – the R7 1800X – now had 8 cores and 16 threads and held its own against Intel’s HEDT line at competitive prices. At the bottom end of the new market, the Ryzen R3 1200 had 4 cores, though still only 4 threads.
Intel of course had to respond somehow, and in fact they had a plan of sorts in place before Ryzen 1000 launched. Arriving with their “Coffee Lake” series of chips was an increase in core count to 6, with the flagship i7-8700K carrying hyper-threading for 12 threads total. i5’s were now 6 cores and 6 threads, and CPUs like the i5-8400 launched to great reviews, though back then I looked with some suspicion at just how long even 6 non-hyper-threaded cores would last.
That was only 2 years ago at the end of 2017. A few months later, AMD launched their 2nd-gen Ryzen with the 6 core 12 thread R5 2600 becoming the new entry-level in the segment. Intel were again forced to slap on a couple more cores at the top of the market, and alongside their 8 core 16 thread 9900K also came new “i9” branding, while their quad cores were now reduced to i3’s.
But I reckoned even 8 cores and 16 threads wouldn’t cut it for long and would soon be seen as “mid range”. I had of course long speculated on the likelihood of 16 core Ryzen 3000 CPUs while others said that was pie in the sky. In the event, AMD launched with only the 12 core R9 3900X this July – but with the promise of 16 cores coming later.
Later is now here and the R9 3950X launches today with availability on the 25th of November.
Nearing the end of 2019, AMD’s 3rd Gen Ryzen family is now complete. 6 cores with 12 threads is still the entry level, with prices stuck around $200 thanks to a lack of decent Intel competition, but at the top end we now have 16 cores and 32 threads – an absolutely monstrous mainstream desktop CPU that Intel has no hope of contending with, as the benchmarks will show.
Looking at the final member of Ryzen 3000 – 16 cores 32 threads as mentioned with a base clock of 3.5GHz and “up to” 4.7GHz boost. AMD noted during their press call that AGESA 1004 is required so look out for BIOS updates for your motherboards.
A mammoth 72MB of cache – that’s including L2 now of course, so 64MB of L3 plus 8MB of L2 – and a very satisfying 105W TDP. 16 cores in 105W is something else and I look forward to seeing some power testing vs the 8-core 3800X.
Now moving on to the entire Ryzen 3000 SKU list, and we see a rather cheeky comparison vs the Core i9 9920X. Intel just launched their follow-up 10000-series with vastly superior performance per dollar, so this 9920X can safely be ignored. AMD will obviously claim that the 10000-series isn’t available yet – but then, neither is the 3950X.
This is really all about making the 3950X at “only” $750 look much better value than it is. Not difficult to do when comparing it to an extremely overpriced Intel CPU. The real “competition” for the R9 3950X is the i9-9900K and now the 9900KS around the $500 mark, but those CPUs only compete in gaming.
This slide shows the improvement in single threaded Cinebench R20 performance by SKU – the gain over the 3900X is pretty much 1%.
Turning over the page, we see “Effortless 1080P Gaming” with AMD keen to show that vs the 9900K there really isn’t a whole lot in it, even showing a few wins in Fortnite, Hitman and Shadow of War. I’ll have a check through the press reviews for evidence of those results. I had noted in a couple of recent videos that Skylake-X didn’t game particularly well and here we see some evidence of that, with the 9920X falling quite far behind on occasion. Expect the same thing from the 10000-series high end desktop chips.
And this is an interesting point, because obviously when you have 16 cores on the desktop, you’re really in the realm of high-end desktop use cases. What this slide shows is that you needn’t give up gaming performance to get there. 1000 and 2000 series Threadrippers offered amazing value but could flop somewhat in gaming, just like Skylake-X.
As we see, the 3950X is a monster at content creation – that looks around 40% faster than the 9900K on average. Note however that there are quite a few raytracing benchmarks in there to take advantage of Zen 2’s vastly improved FPU.
This Handbrake performance – an integer benchmark – looks less impressive at “only” 18% ahead, though I’m fairly certain AMD aren’t showing their CPU in the best light with the settings chosen here.
Now… this one is pretty mind-boggling. They’ve chosen the best case in Cinebench R20 as we saw in the previous slide, but the performance per watt vs Intel’s ancient 14nm is clear to see. Closing in on two and a half times better, and with absolute wall power some 30W below even the 9900K, AMD’s 7nm efficiency is in another league altogether. They are vastly faster while consuming less power. This is what we should see from a good architecture on 7nm vs a good architecture on 14nm. Radeon Technologies Group, please take note.
Of interest to some of you may be that the 3950X does not include a cooler and AMD in fact recommends a 280mm or greater all-in-one.
The last slide of note on the 3950X covers AMD’s new “eco mode” feature, which essentially allows you to run your 16 core 32 thread chip at 65W instead of 105W. Temperatures drop – though to be frank by less than I’d have expected – power drops by a whole lot, and performance obviously drops a bit too, but depending on your needs you might find this a worthwhile trade-off. Note that this feature will be available on all 3rd gen Ryzen CPUs.
And that was the 3950X – it’s the final word in mainstream desktop CPUs.
For some time now AMD’s Athlon brand has been reserved for the entry-level desktop market – still on the same AM4 platform as Ryzen, however – and today AMD announced the Athlon 3000G, a dual core, 4 thread CPU with Radeon Vega 3 graphics. Compared to the previous generation Athlon 200GE they’ve squeezed out another 300MHz of core clock and 100MHz of graphics clock, all at the same 35W TDP. On top of that the price has dropped from $55 to only $49, and this one will be available from the 19th of November.
Looking at performance, AMD has chosen the Pentium G5400 as “competition”, though I use that word loosely as the Pentium in fact gets thrashed across a variety of benchmarks, despite being a locked CPU that costs almost 50% more. Again, from this AMD slide we can see that there are no obvious compromises – the new Athlon is faster in single thread, much faster in multi-thread, and the 3 Vega CUs are a country mile ahead of the weak Pentium IGP.
As I said, the Pentium is also hamstrung by being a locked CPU, and when overclocked the 3000G pulls even further ahead. What’s more, AMD are including a 65W-rated cooler, which leaves a lot of headroom for overclocking this 35W part. The memory can be overclocked too.
Now, I’m gonna run through some analysis on this part, because clearly what we have here is an absolute game-changer at the entry level. It’s faster than the Pentium in everything, much faster in gaming, it’s much cheaper and it comes with an overspecced cooler. Before I started on YouTube, I ran a PC building and exporting business – the vast majority of systems I built were these entry level “gaming” systems, designed to be as affordable as possible while allowing for future upgradeability. Margins in this business are razor thin – this $73 Pentium here might be available for, say, $65 from a PC parts distributor.
Imagine now if you can get much better performance for a lot less money AND a free cooler on top. To a system builder like I was, a part like this is a real game-changer. It’s $24 cheaper and the cooler is probably worth another $6, so on the same part of the build – the CPU – where the profit was maybe $7 or $8, the advantage is now almost $30. How the builder decides to use it is up to them: maybe they take $15 for themselves and drop PC prices by $15, or maybe they try to take the whole amount as profit. Regardless, it’s just a win overall.
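As a quick sketch of that back-of-envelope builder math (all the dollar figures here are the rough estimates from above – the $73/$49 prices, a ~$6 cooler value and $7-8 of typical per-CPU profit – not quoted numbers):

```python
# Back-of-envelope system-builder math using the rough figures above.
# All amounts are estimates from the article, not quoted prices.

pentium_price = 73.0   # price of the competing Pentium G5400
athlon_price  = 49.0   # Athlon 3000G launch price
cooler_value  = 6.0    # rough worth of the bundled 65W cooler
old_margin    = 7.5    # typical per-CPU profit before (~$7-8)

cpu_saving = pentium_price - athlon_price   # the $24 cheaper CPU
advantage  = cpu_saving + cooler_value      # ~$30 total per system

# The builder can split the advantage however they like, e.g. half
# into their own margin and half into a lower system price:
extra_margin = advantage / 2
price_cut    = advantage / 2

print(f"CPU saving:      ${cpu_saving:.0f}")
print(f"Total advantage: ${advantage:.0f}")
print(f"e.g. +${extra_margin:.0f} margin and -${price_cut:.0f} system price")
```

However the split is done, the ~$30 swing per system is what makes the part so attractive at these margins.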
Not only that, but this Athlon is so much easier to market. You can see the difference in playability here – of course you’d be smart and choose Fortnite at 720p, showing the Pentium lagging badly while the Athlon holds playable frame rates. Imagine you’re on eBay or some other marketplace and you can show your low-cost PC running Fortnite at playable frame rates. It’s a massive selling point.
This part should absolutely clean up the whole entry level. Intel clearly has nothing to compete with it. What’s more, Intel know it – recent comments from them have revolved around how they are losing market share at the entry level. That makes sense, because in the kind of CPU shortage they’ve been under for years, the first sacrifices will be made at the entry level. But make no mistake about it… this Pentium and all the other Pentiums around will still outsell the competing AMD parts by a large margin.
Given what I’ve just said, why is that?
Well, first of all it’s the platform – the AM4 motherboards. Even if AMD had 100 million of these 3000G’s to sell, I reckon the yearly supply of AM4 motherboards would barely be a fifth of that. There’s a chicken-and-egg situation here: the motherboard makers don’t want to go in too deep on AMD platforms, yet without them AMD can’t sell the volumes they need in order to change that. Growth is glacial because of this, and even vastly superior parts don’t seem to matter.
The 2nd issue, however, is that there are not, and never will be, 100 million of these 3000G’s to sell. The reason is that this part is actually based on the Picasso chip, which also covers mobile parts like the 3750H and desktop parts like the 3400G. This is a 210mm2 chip. Look at the number of SKUs that are based on it.
As I said, Intel has basically given up competing at the entry level, but AMD cannot possibly get enough of these parts to make a difference. They really need a new design for this market – 210mm2 is too large even for 14nm.
Here is the full Raven Ridge/Picasso die with 4 cores and 11 CUs. Remember this 3000G has 2 cores – half of this – and only 3 enabled CUs. You can count the CUs easily on this die, 11 of them from top to bottom. How much die space could be saved on a part with only 3 or 4 CUs and 2 cores instead of 4? The GPU portion must be at least 50mm2, and perhaps another 20mm2 for half the CPU. Overall, with some rearranging, you could be talking about a 120mm2 chip compared to a 210mm2 chip.
Let’s go over to the old favourite, the die-per-wafer calculator. A 300mm wafer, with the Picasso die at 20 by 10.5mm – close enough – and 0.1 defects per square centimetre on the now very mature GlobalFoundries 12nm process.
That works out to 213 good dies per wafer, and in fact I would wager that all these Vega 3 graphics parts comprise the 49 defective dies instead. So yeah, let’s just say that: 49 of these per wafer.
A 120mm2 chip – say 12mm by 10mm – would give 434 good dies per wafer and 55 defective. There’s a really big difference between 210mm2 and 120mm2, as you can see. If we then run these numbers through the silicon cost calculator at, let’s say, $3,000 per wafer, the cost per die to AMD would be just under $13 for Picasso, and half of that for this hypothetical smaller die.
14nm and 12nm mask sets should come in under $20 million, and maybe closer to $10 million. At roughly $6-7 saved per die, they would need to sell around 2-3 million of these smaller chips to break even on the masks.
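That whole chain of estimates can be sketched with the textbook dies-per-wafer approximation and a simple Poisson yield model. This is a sketch, not the exact calculator used above – real tools also model scribe lines, edge exclusion and rectangular die layout, so the die counts here run slightly higher – but the per-die costs and break-even volume land in the same ballpark:

```python
from math import pi, exp, sqrt

def gross_dies(die_area_mm2: float, wafer_diam_mm: float = 300.0) -> float:
    """Standard dies-per-wafer approximation: wafer area over die area,
    minus an edge-loss term. Ignores scribe lines and edge exclusion."""
    r = wafer_diam_mm / 2.0
    return pi * r * r / die_area_mm2 - pi * wafer_diam_mm / sqrt(2.0 * die_area_mm2)

def good_dies(die_area_mm2: float, defects_per_cm2: float = 0.1) -> float:
    """Yielded dies per wafer using a simple Poisson defect model."""
    yield_frac = exp(-defects_per_cm2 * die_area_mm2 / 100.0)  # mm2 -> cm2
    return gross_dies(die_area_mm2) * yield_frac

WAFER_COST = 3000.0  # assumed 12nm wafer price, as above

picasso_cost = WAFER_COST / good_dies(210)  # ~ "just under $13"
small_cost   = WAFER_COST / good_dies(120)  # roughly half of that

saving_per_die = picasso_cost - small_cost  # ~ $6-7 saved per unit
for mask_cost in (10e6, 20e6):              # the $10-20M mask set estimate
    print(f"${mask_cost / 1e6:.0f}M masks -> break even at "
          f"~{mask_cost / saving_per_die / 1e6:.1f}M units")
```

With a ~$6-7 saving per die, a $10M mask set pays for itself in under 2 million units and a $20M set in around 3 million – hence the 2-3 million figure.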
Worth it? It’s worth trying is what I’d say. The last massively successful AMD chip was Bobcat, the tiny ~80mm2 APU they built at TSMC back in 2011. That chip sold around 30 million units in one year and killed Intel’s woeful Atom netbook processors almost overnight.
There are other considerations too, for example branding. Sure – sell these APUs for $50, or better still – sell them for $55 with $5 rebate on each unit – so long as the PC carries AMD branding. I know for a fact that as a system builder and exporter, I’d have taken you up on that offer. AMD has to do this stuff – simply selling the chips is not enough and never has been.
But it’s safe to say that I really like this part. It’s all about the price and the domination over the competition, obviously – and it’s just another example of how “unfair” the CPU world is when such a compelling part won’t see the sales it deserves.
And finally, it’s time for what many will consider the real star of today’s show, with the introduction of 3rd Generation Threadripper.
According to AMD, the high end desktop market is worth $1.2 billion, and also according to AMD, at various points over the past couple of years some Threadripper models were actually the top-selling HEDT chips. That $1.2 billion is revenue, so profits will be a fair bit less… though when you see the prices, you might wonder.
But first of all, a surprise. I have clearly been something of a critic of AMD’s decision to “premiere” Threadripper with “only” 24 cores. It seems that somebody at AMD is actually listening, finally, and they will now launch with two models: 24 cores and 32 cores.
This was always a financial decision rather than a technical one, and the prices of both chips betray that truth. Recently I speculated that while AMD could sell the 24-core chip at $1000, they’d be more likely to aim for around $1300. I wasn’t too far off, and I wasn’t too surprised to see them squeeze us just a little harder than I’m comfortable with at a $1400 price point for the 3960X.
AMD really do value those full 8-core chiplets, as the 32-core 3970X shows. Two grand for that one, and it should be noted that’s 43% higher cost for only 33% more cores. The 4.5GHz boost clock further betrays AMD’s strategy, as these are clearly not top-drawer chips. In case you were wondering, the TDP of both parts is 280 Watts – think about that compared to the 3950X, which is clearly on another level silicon-wise at 105W for only a slightly lower base clock.
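The 43%-more-money-for-33%-more-cores claim is easy to verify from the list prices (a quick sanity check, nothing more):

```python
# Sanity-checking the Threadripper pricing claim from the list prices.
threadrippers = {"3960X": (1400, 24), "3970X": (2000, 32)}

for name, (price, cores) in threadrippers.items():
    print(f"{name}: ${price} / {cores}C = ${price / cores:.2f} per core")

price_ratio = 2000 / 1400   # ~1.43 -> 43% more money...
core_ratio  = 32 / 24       # ~1.33 -> ...for 33% more cores
print(f"+{(price_ratio - 1) * 100:.0f}% price for "
      f"+{(core_ratio - 1) * 100:.0f}% cores")
```

At about $58 per core for the 3960X and $62.50 for the 3970X, the bigger chip actually costs more per core – the opposite of a volume discount, which is exactly the chiplet-binning strategy described above.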
It’s disappointing to me to see such low quality silicon being sold at such a high price; however, in raw performance terms there is clearly nothing even remotely close to either of these CPUs.
Compared to Intel’s 18-core 9980XE, both new Threadrippers dish out a thrashing that essentially makes Intel’s “high end desktop” look more like an i3. Even though that’s Skylake-X, Cascade Lake-X will not change the result by much. What will be interesting to see is how well these new Threadrippers game in comparison, though obviously you’re probably not buying one for that purpose. There will be a video from AMD showing performance, though from what I was able to ascertain it will mostly go over these same benchmarks.
Now, something else that will upset some of you, I’d wager: the new platform, TRX40. I actually leaked that name 3 months ago on my patron Discord but didn’t bother to mention it in a video. Simply put, a new socket for Threadripper 3000 means X399 is now pretty much a dead end, just like X299. I know some of you who bought X399 in preparation for Threadripper 3000 will be quite irritated by the lack of backward compatibility, and what’s more, if Rome drops into Naples motherboards I can’t see any good reason why Threadripper 3000 couldn’t drop into Threadripper 1000 and 2000 series motherboards.
AMD claims the doubling of PCIe lanes between the CPU and chipset, from 4 to 8, was the first reason – and then proceeded to use that “scalability” word we all hate as reason #2.
Now, to finish this off: it will seem kinda weird to some of you that I spent more time talking about the Athlon than the other two, far more illustrious members of AMD’s chip family. I really like to see AMD offering much better parts at much cheaper prices. I know many of you AMD shareholders will balk at such an idea, and I understand why. In the end, that’s just me, and I wish AMD would do it throughout their entire lineup.
As it is, they’re really pushing it with some of their prices. $750 for the 3950X? I can just about accept that. I get why this HEDT-performing part costs so much – it’s so far ahead of anything else around today, and far, far ahead of anything we’ve ever seen in this space. I want to congratulate AMD on such a clean kill, their complete domination, and also for simply offering this part at all. They didn’t have to, frankly, but they did, and because of that the price is something I can live with.
But there’s still a huge market they need to increase their presence in, and they won’t do it at those prices. They need to incentivize the OEMs – make it such a clean kill on pricing that none of them can refuse. Intel has nothing; they can do nothing; they have no response. And we see this everywhere in the current CPU landscape: the Athlon cleanly kills the Pentium, and Ryzen 3000 cleanly kills on performance but isn’t overwhelmingly better on price – prices could be adjusted to make that a clean kill overall too.
And then there’s Threadripper 3000. I was so excited about the first generation Threadripper, 16 cores, almost unbelievable. Even the $1000 price tag couldn’t take away from it because Intel’s prices were absolutely ludicrous at the time. What was it, $1700 for the 6950X? A 10-core CPU.
Today, Threadripper 3000 disappoints me. Sure the performance is even more crushing than before, but the price-performance… I’m not certain. I am certain that the 3970X isn’t worth $2000 based on price/performance, looking at AMD’s own charts shows that, however I’ll say what I said about Intel’s 6950X a few years ago in that if you need the best performance, you only have one choice and it will be worth it to you.
One issue I have here, though, is that we know this isn’t the best AMD can do. We know they could launch a 48-core and then a 64-core if they wanted to. What’s more, looking at the names, there’s room for a 3980X and a 3990X.
Whether or not they will bother to launch those? Who knows. It’s unbelievable how far ahead they are on performance, absolutely incredible in fact and the only reason our minds aren’t blown by this is because we’ve been living it the past few years.
But there you have it. Absolutely crushing AMDomination on every front. Even when Intel were so far ahead during the Bulldozer days, you could always just about make a case for AMD at the entry level. Now? The only mercy left for Intel is AMD’s pricing, and perhaps a lack of real killer instinct on AMD’s part. That will come with time and money, though.