AMD RX 6000 Analysis: Snatching Defeat from the Jaws of Victory
Today, AMD launched its new RX 6000 series GPUs (to be specific, the highest-end ones). I have been eager to see what RX 6000 can do, and you can probably tell from the title that I think AMD could have done a better job. RX 6000 represents a large technological leap, but the product itself is much less exciting.
The RX 6000 Series
Or, more accurately, the RX 6800 series, because AMD only launched the RX 6800 and RX 6800XT. AMD is also set to launch the 6900XT next month, but that card is just a 6800XT with 8 more CUs and somewhat better binning.
[Spec table: base/boost clock speeds for the RX 6800, RX 6800XT, and RX 6900XT; all three cards ship with 16 GB of GDDR6.]
Basically, AMD has just undercut Nvidia's lineup by around $50, which is the same thing AMD did with the RX 5000 series. That doesn't make for an immediately impressive product, but we haven't considered performance yet. It should also be noted that saving $50 on a $700 GPU is not nearly as significant as saving $50 on a $400 GPU, which was the advantage RX 5000 enjoyed.
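To make that pricing point concrete, here's a quick sketch of why a flat $50 discount matters less as the price climbs (the $700 and $400 figures are the ballpark price classes discussed above, not exact MSRPs):

```python
# A flat discount expressed as a fraction of the competing card's price.
def relative_saving(discount: float, price: float) -> float:
    """Fraction of the purchase price saved by a flat discount."""
    return discount / price

# Roughly 7% off in the 6800XT's price class vs 12.5% in the 5700XT's.
high_end = relative_saving(50, 700)
mid_range = relative_saving(50, 400)
print(f"{high_end:.1%} vs {mid_range:.1%}")
```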
Moving on to features, starting with the new Infinity Cache. This is a 128 MB cache on the GPU die, and it allows for a more economical memory configuration. The RX 6800 and RX 6800XT have 16 GB of GDDR6, but on a decidedly mid-range 256-bit bus. Thanks to Infinity Cache, AMD doesn't need a large 512-bit bus for GDDR6 or highly expensive HBM2. AMD seems to think every processor could use a ton of cache, and it appears they're right.
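The bus-width tradeoff is easy to quantify. A minimal sketch of peak GDDR6 bandwidth, assuming 16 Gbps modules (a common GDDR6 speed; treat the exact data rate as an assumption):

```python
# Peak memory bandwidth = (bus width in bytes) x (per-pin data rate).
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

# Assumed 16 Gbps GDDR6: the 256-bit bus AMD shipped vs a hypothetical
# 512-bit bus it avoided thanks to Infinity Cache.
narrow = peak_bandwidth_gbs(256, 16.0)  # 512 GB/s
wide = peak_bandwidth_gbs(512, 16.0)    # 1024 GB/s
print(f"{narrow:.0f} GB/s vs {wide:.0f} GB/s")
```

The wide bus doubles raw bandwidth, but Infinity Cache aims to recover much of that difference by serving hot data on-die instead.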
Ray tracing also makes its way to AMD GPUs, but if the language of the slide deck is anything to go by, we probably shouldn't expect leading performance. Actually, I'll just come out and say it: ray tracing on RX 6000 is not competitive. But it's here, and that's certainly better than nothing at all. Perhaps there's also room for improvement in the drivers? Nvidia significantly improved RTX performance through updates; perhaps AMD could do the same.
AMD is also introducing Smart Access Memory (or SAM, AMD's branding of PCIe Resizable BAR), which is only available on PCs that pair an RX 6000 GPU with a Ryzen 5000 CPU and a 500 series motherboard. It basically allows the CPU to directly access the GPU's VRAM, and it is supposed to deliver a nice performance gain. On paper, it sounds like a very nice deal, but in practice it's kind of a gimmick.
We already know Big Navi is about twice as fast as the original Navi, but the big question is whether or not Big Navi beats Ampere, or if it’s simply worth buying at all. Let’s find out.
Decent Performance with Ray Tracing Off
For my analysis, I will be using the reviews of the following publications: TechPowerUp, Hardwareluxx, ComputerBase, Igor’s Lab, and Phoronix (specifically for Linux). I’m just going to go through these reviews one by one since there’s so much data to sift through. At the very end, I’ll tie everything into one single conclusion.
First, TechPowerUp. They found that the 6800XT was a hair behind the 3080 and maybe two or three hairs behind the 3090, while the 6800 was about a match for the 3070. Looking closer, a few games are particularly kind to AMD: Far Cry 5, Hitman 2, and Strange Brigade. Additionally, the RX 6800 and 6800XT lose ground against the RTX 30 series as resolution increases, the RX 6800 in particular. TechPowerUp also tested SAM and found that it made no difference overall, though some games (such as Gears of War 5) saw gains while others (like Borderlands 3) saw losses.
Moving on to Hardwareluxx, we see a similar story; from a cursory glance, AMD might be doing slightly better against Nvidia here than in TechPowerUp's review. Hardwareluxx tested Watch Dogs Legion, which seems to be another good title for AMD, as both RX 6000 GPUs beat the 3090 at 1080p. As the resolution increases, AMD loses ground as expected, but it should be noted that, for whatever reason, Hardwareluxx decided to test this game with DLSS on, and this makes a huge difference at 4K. Death Stranding was also more positive for AMD in this review than in TechPowerUp's. Concerning SAM, Hardwareluxx also found that it usually didn't change performance, though there were cases where performance went slightly up or down.
ComputerBase's numbers fit neatly with TechPowerUp's and Hardwareluxx's overall, but several titles that weren't very good for AMD in the previous two reviews, like Borderlands 3, were actually pretty good here, resulting in the 6800XT gaining ground against the 3080. I'm not entirely sure why, but it should be noted that ComputerBase's test system used a 3900XT CPU while TechPowerUp and Hardwareluxx used a 9900K and a 10900K respectively; perhaps there is a slight CPU bottleneck in ComputerBase's review, though that doesn't totally explain why the 6800XT was sometimes faster, even if only by a hair. ComputerBase also tested SAM and likewise found it underwhelming overall.
Igor's Lab produced data similar to the previous reviews', but I'm primarily drawing on their power efficiency section, since they provide that data game by game. I should mention one important difference between this review and the others: the 6800 was actually slightly faster than the 3070 and 2080Ti, whereas in other reviews these GPUs are about tied.
Finally, for the Linux section of this analysis, Phoronix found that the 6800XT is just behind the 3080 (the 3090 was not tested), but in their view the 6800XT is clearly the better GPU thanks to superior open source drivers; Nvidia is notorious for its poor open source driver support on Linux.
So, overall, RX 6000 falls short of the gaming crown, and I don't expect the 6900XT to deliver the crown to AMD either. 1440p and 4K performance is particularly disappointing when these are the resolutions I expect people to use while gaming on these GPUs. SAM is also disappointing because it just doesn't make a big difference. Maybe it needs more tweaking, but at the moment it's just a small bonus (or occasionally a penalty) for people who happen to build an all-AMD PC, not a real reason to build around only the newest AMD parts.
Personally, I’m especially disappointed given the expectations AMD set. I expected the 6800XT to tie the 3080 and it was slightly behind. The 6800 is even more disappointing since it did not clearly beat the 2080Ti like AMD said it would. Some competition is better than no competition, but AMD needs to stop making it hard for themselves to look good. They should have been honest.
Bad Performance with Ray Tracing On, but it’s not a Deal Breaker
In short, ray tracing on RX 6000 is bad. Across all reviews, the 6800XT was consistently behind the 2080Ti in ray tracing performance, except in Dirt 5 and Watch Dogs Legion. This is extremely disappointing, but I'll give AMD a pass here because ray tracing isn't mainstream… yet. Next generation, I expect most consumers to treat ray tracing as an important factor when buying a GPU. More and more games are adding support for it, and with the advent of the new consoles, there should be a flood of these games within a year or two.
AMD Takes the Lead in Power Efficiency, Barely
I am pleased with AMD's improvement with respect to power efficiency, where we see AMD take the lead from Nvidia, albeit barely. This is not terribly important on the desktop, but it is crucial for mobile, especially gaming laptops. For this segment, I'm relying on the review from Igor's Lab, which provides power efficiency figures on a game-by-game basis.
Throughout Igor’s testing (excluding ray tracing benchmarks), the RX 6800 is consistently the most efficient GPU. The 3070 does at times tie or beat the 6800, but generally speaking the 6800 is the leader. The 6800XT also does well and consistently beats the 3080 and 3090, but the margin AMD achieves isn’t as large as I’d expect it to be (more on that later). What AMD lacks in raw performance they make up for in efficiency, which is not a bad thing to chase when Nvidia’s power consumption is starting to become prohibitively high. But, again, the lead is not quite as large as I think AMD needed it to be.
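The efficiency metric behind charts like Igor's Lab's is simply average frame rate divided by average board power. A minimal sketch, with placeholder numbers that are purely illustrative (not Igor's measurements):

```python
# Performance per watt: the metric underlying GPU efficiency rankings.
def fps_per_watt(avg_fps: float, avg_power_w: float) -> float:
    """Average frames per second delivered per watt of board power."""
    return avg_fps / avg_power_w

# Hypothetical placeholder figures, for illustration only.
cards = {
    "Card A": (100.0, 230.0),
    "Card B": (98.0, 240.0),
}
for name, (fps, watts) in cards.items():
    print(f"{name}: {fps_per_watt(fps, watts):.3f} FPS/W")
```

Note that a card can lose the raw-FPS comparison and still win this metric, which is exactly the position the 6800XT is in against the 3080 and 3090.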
AMD Sabotages Themselves Yet Again
I'm not an AMD fanboy, but it's hard not to get frustrated watching AMD fumble yet another GPU. This time they left a significant amount of clock speed on the table, clock speed that probably would have given them the margin they needed to tie or even beat Nvidia. This is especially disappointing when Ryzen has shipped at essentially its maximum clock speeds ever since the 2000 series launched in 2018.
Nominally, the 6800XT boosts to ~2200 MHz, but it can be overclocked to about 2500 MHz. This, however, requires the user to set that clock speed target in software and raise the power limit. If the 6800XT isn't going to run at 2.5 GHz out of the box, the 6900XT needs to, and of course it won't, because it's rated for the same clock speeds as the 6800XT. All that extra clock speed might as well not exist.
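To put that headroom in numbers, a rough sketch using the clock figures above (performance never scales perfectly linearly with clock speed, so treat the result as an upper bound on the potential gain):

```python
# 6800XT: ~2200 MHz stock boost vs ~2500 MHz with a manual overclock.
stock_mhz = 2200
oc_mhz = 2500

# Fractional clock headroom left on the table at stock settings.
headroom = (oc_mhz - stock_mhz) / stock_mhz
print(f"Untapped clock headroom: {headroom:.1%}")
```

Even with imperfect scaling, a double-digit clock bump is comfortably more than the "hair" separating the 6800XT from the 3080 in most reviews, which is why leaving it unused is so frustrating.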
I’m not even sure what AMD was thinking. It is in their best interest to launch the fastest GPU they can and they… just decided not to. It’s baffling. If the 6900XT at 2.5 GHz is faster than the 3090 (and I don’t expect it to be but it’s entirely possible), then AMD has just shot themselves in the foot because the 6900XT reviews will show it lagging behind due to a pointlessly restrained clock speed. Will the 6900XT even be faster than the 3080?
When I consider the whole product, the price, the performance, and the efficiency, my feelings on the new RX 6000 series are mixed and very complicated. Very quickly, I will answer the basic and most important question about RX 6000: should you buy it? Based on these reviews (we’ll have our own at some point), I give RX 6000 a slight recommendation. I have some more nuanced thoughts, however.
The product AMD has delivered is only slightly competitive. This situation resembles the era when AMD had the HD 5000 series and Nvidia had the GTX 400/500 series. Like with HD 5000, AMD is just behind Nvidia, but RX 6000 does not deliver the $100+ discount HD 5000 offered against the GTX 400/500 series. To be fair to AMD, they lost a ton of money on HD 5000 (and pretty much every Radeon GPU ever) by pricing it too low, but that's not relevant to the concerns of the consumer. RX 6000 is a significantly more efficient product, but that's not terribly important on the desktop. Personally, this efficiency is perfect for my ITX PC, but others won't be so convinced.
The technology AMD has utilized is impressive. AMD doubled the performance of RDNA1, increased efficiency by ~50%, used a mid range 256 bit bus, and didn’t even need a new node. It’s a large leap for AMD, but Nvidia’s not far behind, even though Nvidia’s pace has become increasingly lethargic since 2017. When you think about it, Nvidia is just one node away from being back on top. This is a precarious position for AMD to be in.
The future of AMD’s and Nvidia’s gaming GPUs is unclear. RDNA3 should come to market sooner than Nvidia’s succeeding architecture (most likely Hopper) and the 50% efficiency boost from RDNA3 is promising. But, if Nvidia really wanted to, they could just advance to the most cutting edge node like they did with Pascal and blow AMD out of the water. If Nvidia just put Ampere on TSMC’s 7nm node, it would probably beat RDNA2 and could pose a challenge to RDNA3.
But back to the present. This was AMD’s fight to lose. The RTX 30 series is very underwhelming, second only to the RTX 20 series in this respect. Nvidia’s pace on desktop gaming GPUs has been slowing more and more ever since they launched Pascal in 2016. Yet AMD was still not able to outright beat Nvidia. AMD needed to make RX 6000 maybe another $50 cheaper to be the clearly better product. Or actually offer the full performance of the GPU out of the box. Or both. And they chose not to do either. I can understand the pricing (their prior business model was unsustainable), but not delivering the full performance this GPU is capable of is confusing. It’s not like AMD has to worry about being too hot and loud for once. I do want to be clear, RX 6000 isn’t a disaster, it’s just okay.
When I compare my conclusion to the conclusions of the reviews on which I based my analysis, I can see that I'm clearly more pessimistic. I think the reason is that reviewers expect AMD to do the same thing with its GPUs as it did with its CPUs: take the lead by a significant margin once RDNA3-based GPUs arrive. But Nvidia is not Intel, and Radeon is not Ryzen. I just do not see that kind of optimistic future for AMD. A year or two from now, I expect Nvidia to be right back on top. I really, really want AMD to prove me wrong.
Liked it? Take a second to support Matthew Connatser on Patreon!