What a day!
AMD has had a massive launch today, releasing both its full Ryzen 3000 desktop CPU lineup and its new Navi GPUs, based on a totally new architecture: RDNA.
Special thanks to EposVox for helping out with CPU/GPU testing. Together, we've got a lot of coverage, including much of the esoterica around Linux, streaming, encoding and more.
- Level1 Radeon 5700/XT Review & Test
- Level1 3900x and 3700x Tested -- CPU Review Video
- Level1 Linux Related
- Level1 Linux Help Thread
- EposVox Launch Vid -- How fast is the 3700x and 3900x!?
- EposVox FASTER THAN 2080ti?! - AMD Radeon RX 5700 & 5700XT for Streamers & Content Creators (AMF UPGRADE?!)
- EposVox - Testing the MSI Tomahawk B450 with Ryzen 3700x & the Radeon 5700
We tested the Gigabyte Aorus Master (A+) and the ASRock Taichi X570 for launch. We will have full reviews of those up in the next few days.
If you're looking for GPU info, head over here as this article focuses mainly on the CPU side of things.
Make no mistake – the Ryzen 5 and up CPUs in the 3000 series aren't just incrementally improved – they are legends in the making.
It's all about the Core Wars
The Ryzen 1700 caught my attention in March of 2017. AM4 was a brand new socket in town. $329 USD for an 8-core mainstream desktop CPU? It seemed absurd, and yet it would go on to be one of the most compelling CPUs in a decade. At the time, a competing 8-core CPU from Intel in the high-end desktop segment cost about $1,000 USD.
The mainstream desktop performance CPU gauntlet had been thrown down by AMD. While AMD did not have a clock speed advantage, they offered more cores to offset that disadvantage. Would regular computer users find a use for 8 cores on a mainstream desktop? Would software adapt?
In the two years since, Intel has released the overclockable 6-core 8600K and the 5GHz 8-core 9900K, both of which offer very strong single- and multi-core performance.
In a recently-leaked internal memo, even some Intel insiders concede that AMD has become "formidable"; their expectation is that AMD will generally continue to win on multi-threaded, high-core-count parts.
It's Also About Single Threading
While 8 cores on the desktop from both Intel and AMD have opened up a number of possibilities, it's often the case that single-threaded and "lightly-threaded" applications are more important to end-users. It should go without saying that a system that maintains responsiveness and snappiness, even under heavy load, is generally perceived by end-users as "better," even if a less responsive machine completes a background job faster.
It's also true that Intel's 14nm process is formidable; it's probably the best 14nm process in the world at this point. A 5GHz clock speed on x86 is no easy feat. We learned at Computex 2019 that Intel plans to release a 5GHz all-core version of the 9900K (the 9900KS), and I suspect this is an attempt to use clock speed to maintain a competitive advantage.
One of the biggest questions everyone is trying to answer now, and in the weeks ahead as we settle in with these new CPUs, is: "Does AMD have a single-thread performance advantage, or performance parity, with Intel?" The shortest answer is: it's complicated, but basically yes.
Comparing single-threaded and "lightly-threaded" apps across the two companies' processor families has been the most difficult aspect of this review.
The processors and platforms are very different; what each company is building for and optimizing to is very different. There is no truly unequivocal answer and it can get very technical the more you dive into it. I love that.
Getting to Know the Ryzen 3700X and the Ryzen 3900X
The main focus of this article is two CPUs:
| | Cores | Threads | Boost | Base | Price | ...compares to | TDP | Max 1t* | Max AC* |
|---|---|---|---|---|---|---|---|---|---|
| Ryzen 7 3700X | 8 | 16 | 4.4GHz | 3.6GHz | $329 | Intel i7 9700K | 65W | 4.6 | 4.3 |
| Ryzen 9 3900X | 12 | 24 | 4.6GHz | 3.8GHz | $499 | Intel i9 9900K | 105W | 4.65 | 4.2 |
*Observed values with PBO set to "Enabled" and XMP on, but no other tweaks. Note that these have changed a bit as of the UEFI updates that came down on 7/6; additional testing is coming in the next few days.
There is also the Ryzen 7 3800X. I am currently in line at Microcenter to pick one of these up. I am guessing that it will perform similarly to how the Ryzen 7 3700X behaves with PBO and an overclock, but I cannot yet say with certainty.
Both of these CPUs ship with the RGB Wraith Prism CPU cooler. Unless you plan to use Precision Boost Overdrive (PBO), you probably do not need to buy an aftermarket cooler, especially with the 3700X.
For my test system, I also used a Fractal Celsius S24 All-in-one 240mm radiator with the 12-core Ryzen 9 3900X, which performed well and let that particular CPU really stretch its legs.
For overclocking the 3700X, I found that switching the fan speed from low to high on the physical switch on the Wraith Prism cooler was sufficient to allow 4.3GHz all-core boosts and 4.4 to 4.55GHz 1-thread boosts endlessly (on the test bench, anyway). It took quite a bit of fiddling to get there, though, and I am hoping this is improved via software updates.
The new AMD CPUs based on "Zen 2" have a lot of tweaks under the hood, and many of those tweaks really help in the single-threaded and "lightly-threaded" test scenarios. The L3 cache is monstrous; it's now called GameCache in the marketing material to call attention to the fact that you get so much more of it than on any other mainstream CPU.
Is it weird to have a CPU that doesn't downclock for AVX workloads? While the AMD CPUs don't support AVX-512, improvements in Ryzen 3000 mean that 256-bit AVX2 operations can now be serviced in a single cycle, rather than being split into two 128-bit operations as on earlier Ryzen chips.
| DIMMs | Ranks | Officially Supported Speed (MT/s) |
|---|---|---|
| 2 of 2 | Single | 3200 |
| 2 of 4 | Single | 3200 |
| 4 of 4 | Single | 2933 |
| 2 of 2 | Dual | 3200 |
| 2 of 4 | Dual | 3200 |
| 4 of 4 | Dual | 2667 |
I was able to test both systems with 64GB of Trident Z RGB memory at up to 2933MHz. This is technically a mild overclock, since these are dual-rank DIMMs (the max supported speed in this configuration is 2667), but it worked fine for me. YMMV.
It is worth noting that in synthetic benchmarks like AIDA64, the memory write numbers will be reported as fairly low (this appears to be a side effect of the chiplet design, where the write path from the compute die to the I/O die is narrower than the read path). I was concerned about this initially, but in real-world scenarios it didn't seem to actually matter much.
Lies, Damned Lies, And Benchmarks
My impression of the messaging from Intel is that, while AMD does have high core counts on the mainstream socket, Intel has high-end desktop (HEDT) offerings that match or beat what AMD has (in a different price class, of course), and that end-users should focus on real-world experiences and not benchmarks.
Fair enough: when I'm computing some insane pivot table or some other background process is taking a while, I'd kind of like my computer to be fast, "snappy," and responsive no matter what is going on in the background.
My overall opinion of both first- and second-gen Ryzen was that the single-thread performance, while not as good as Intel's, was certainly good enough for the value proposition. And the additional cores, cache, and chip arrangement meant that, in general, the system tended to be pretty responsive under load.
In 2019, with the launch of the Ryzen 9 3900X and the Ryzen 7 3700X, AMD is at parity with Intel for single-thread performance, even with a raw clock speed deficit, at least in the real-world scenarios I've been able to test.
Single-thread scores in Cinebench, for me (after a bit of tweaking), were better on the Ryzen 9 system than on my test Intel system, even with the latter overclocked: a 9900K-based system running at 5GHz all-core. Testing with CPU-Z's little built-in benchmark utility put the Ryzen 9 3900X slightly behind the 9900K for single thread, but the Ryzen 9 system was more than 35% faster in the multi-thread test.
Generally, this was the pattern I saw throughout all testing. In real-world scenarios, AMD was at parity or did better. In synthetic benchmark scenarios, Intel might be slightly ahead for single-threaded or lightly-threaded apps (and that's with a 5GHz to 5.1GHz clock speed on the Intel side vs. 4.6GHz on the AMD side).
I think I've got no choice but to call this one for AMD, at least in real-world scenarios. It's generally the faster CPU now in all the "real-world" testing I've done so far.
So Long X299
X299 is Intel's current high-end desktop platform. It's the in-between CPU socket a computer user might go to when they're after something more powerful than an 8-core CPU but not quite as powerful (or expensive) as a high-end server CPU. Intel currently sells CPUs here ranging from 8 to 18 cores, and the platform offers more memory channels to support more system memory and more PCIe lanes for peripherals. For anyone buying X299 just for compute speed, though, there was a trade-off: single-core speed. Even with single-core turbo, CPUs for the X299 chipset such as the $1,700 18-core 9980XE couldn't come close to the 1-core performance of a 5GHz 9900K on the Z390 chipset. And that 9900K is just an 8-core CPU, costing around $500.
The Z390 chipset doesn't support more than 8 cores, and perhaps never will.
So far our comparisons have been focused on the 9900K, but anyone buying X299 for 10 to 16 cores of compute, and not for increased memory capacity or more PCIe connectivity, would be remiss not to strongly consider AM4 and the 12-core Ryzen 9 3900X for their needs.
The 3900X's performance and benchmarks are almost universally better than anything below 14 cores on the X299 platform, in every compute scenario. I expect the 16-core 3950X to at least offer compute performance parity with the 18-core Intel 9980XE at less than half the cost, which is no small feat. Compared against Intel's HEDT CPUs, AMD easily wins for both single- and multi-threaded compute-focused workloads. It's shocking, really.
I would expect AMD's next-generation HEDT product to devour this entire market whole.
First Desktop CPU with PCIe 4.0; Chiplets in full-force
I have an amazing Talos II Power9 system with PCIe 4.0 from Raptor Computing – technically this is the first desktop computer with PCIe 4.0. But, thanks to companies like AMD and Gigabyte who have been pushing for PCIe innovation, I can now actually purchase a fast PCIe 4 SSD. Yes, the SSD is so fast it would bottleneck a PCIe 3.0 interface.
AMD's new graphics cards – the Radeon RX 5700 and RX 5700XT – are also PCIe 4.0. While most games won't take advantage of that yet, there are some benchmarks that can show it makes a difference.
The CPUs offer a total of 24 PCIe 4.0 lanes. The X570 chipset muxes 8 PCIe 4.0 lanes (plus a lot of other connectivity like USB 3, SATA, etc.) into 4 of those 24 lanes, leaving 20 PCIe 4.0 lanes for direct-to-CPU connectivity. Motherboards typically configure this as 4 lanes dedicated to a PCIe 4.0 NVMe drive plus 16 lanes for graphics, run either as x16 for a single GPU or x8/x8 for two GPUs.
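To put that lane budget in one place, here's a quick illustrative sketch. The 24/4/20 split comes from the text above; the NVMe/graphics slot assignment is a typical board layout, not a spec.

```cpp
// Sketch: the X570 platform's CPU lane budget as described above (illustrative).
#include <cstdio>

int main() {
    const int cpu_lanes    = 24;                       // PCIe 4.0 lanes from the CPU
    const int chipset_link = 4;                        // x4 link feeding the X570 chipset
    const int direct       = cpu_lanes - chipset_link; // 20 lanes left, direct to the CPU
    const int nvme         = 4;                        // x4 for a CPU-attached NVMe drive
    const int graphics     = direct - nvme;            // x16 for one GPU, or x8/x8 for two
    std::printf("direct: %d = NVMe x%d + graphics x%d (x16 or x8/x8)\n",
                direct, nvme, graphics);
    return 0;
}
```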
Infinity Fabric, the internal interconnect, has been reworked to support PCIe 4.0 clocks and bandwidth. A single Ryzen 5 3000 series CPU (and up) is composed of two to three chiplets: one or two compute chiplets (two for CPUs with more than 8 cores) plus one "I/O die." The compute chiplets are fabricated on TSMC's 7nm process and connected to the I/O die, which is fabricated at a different foundry on a 14nm process. All of the memory and I/O for the rest of the system goes through the I/O die.
This has the potential to introduce additional latency as the CPUs are no longer directly connected to system memory as was the case with first and second generation Ryzen products.
To offset that extra hop, AMD has doubled the L3 cache: a whopping 64MB on the Ryzen 9 3900X (and the 3950X), and 32MB on the Ryzen 7 3700X.
They didn’t just double the cache; there are major architectural improvements as well.
Infinity Fabric speed can now be decoupled from the memory clock, a new feature of Ryzen 3000. Decoupled operation adds a bit of latency, so it only pays off when memory clocks are high enough to make up for it.
The sweet spot for best performance seems to be around 3600MHz memory speed with timings such as 14-14-14-34-1T or 16-18-18-36-1T. At DDR4-3600 in coupled mode, Infinity Fabric runs at 1800MHz (half the effective data rate). Technically this is a bit of an overclock, as the maximum supported fabric speed is 1600MHz.
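To make that arithmetic concrete, here's a minimal sketch of the coupled-mode relationship. The DDR4-3600 figure and the 1:1 coupling come from the text above; the variable names are mine.

```cpp
// Sketch: memory clock vs. Infinity Fabric clock in coupled (1:1) mode.
// "DDR4-3600" means 3600 transfers per second per pin; DDR transfers twice
// per clock, so the actual memory clock is half that, and coupled FCLK
// matches the memory clock.
#include <cstdio>

int main() {
    const int ddr_rate = 3600;          // e.g. DDR4-3600 (effective MT/s)
    const int memclk   = ddr_rate / 2;  // real memory clock: 1800 MHz
    const int fclk     = memclk;        // coupled 1:1 mode: fabric at 1800 MHz
    std::printf("DDR4-%d -> MEMCLK %d MHz -> coupled FCLK %d MHz\n",
                ddr_rate, memclk, fclk);
    return 0;
}
```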
Latencies, Latencies, Latencies
The arrangement with the I/O die and the CPU chiplets means that there is an extra "hop" before data can make it from memory into the CPU (assuming it is uncached, of course). Potentially this is a worse situation than on, for example, the Ryzen 2700X which was directly connected to main memory. Caches do work, though, and having a lot of it is better than a little.
We used this tool:
https://github.com/ajakubek/core-latency
...to measure the core-to-core round-trip latency.
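If you're curious how such a measurement works, here is a minimal sketch of the general ping-pong technique these tools use. This is an illustration of the method only, not core-latency's actual code; the core numbers are assumptions for a Linux system.

```cpp
// Minimal sketch: two threads pinned to different cores bounce an atomic
// flag back and forth; the average loop time approximates the core-to-core
// round-trip latency. Linux-specific affinity calls; cores 0 and 1 assumed.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <pthread.h>
#include <sched.h>
#include <thread>

static std::atomic<int> flag{0};
constexpr int kIters = 1000000;

// Pin the calling thread to one logical core.
static void pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main() {
    std::thread responder([] {
        pin_to_core(1);  // assumption: logical core 1 exists
        for (int i = 0; i < kIters; ++i) {
            while (flag.load(std::memory_order_acquire) != 1) {}  // wait for ping
            flag.store(0, std::memory_order_release);             // pong back
        }
    });

    pin_to_core(0);
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) {
        flag.store(1, std::memory_order_release);                 // ping
        while (flag.load(std::memory_order_acquire) != 0) {}      // wait for pong
    }
    auto stop = std::chrono::steady_clock::now();
    responder.join();

    double ns = std::chrono::duration<double, std::nano>(stop - start).count();
    std::printf("core 0 <-> core 1 round trip: ~%.0f ns\n", ns / kIters);
    return 0;
}
```

Building the full core-to-core matrix is then just a matter of sweeping the two pinned core IDs over every pair.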
There are CCX-to-CCX latencies within a single chiplet and, in the case of the 3900X, chiplet-to-chiplet latencies, which were a bit higher. Here is a table of the latencies we observed.
TODO: Table
We were able to confirm minimum latencies to main memory of 68-70ns with our Trident Z Royal memory running at 3600 with 15-15-15-15-34-1T timings (slightly tweaked speeds over the XMP profile).
Even though the latency here is perhaps a bit worse than on the 2700X (and worse than on the competing Intel platform), it surprisingly didn't seem to matter much in gaming scenarios, and didn't matter at all in productivity scenarios.
The Ryzen 3000 series CPUs are even able to match Intel's performance at high framerate 1080p gaming – something that can be sensitive to latencies within the system and was perhaps a point of consternation around the Ryzen 2700X launch for some reviewers.
The ASRock Taichi X570 UEFI features a number of tunables deep in its menu system. It seems possible to enable an option that will make the CCXs/CCDs behave as "near" NUMA nodes, which may be useful in certain edge-case scheduling scenarios. I will experiment with this a bit in a future video.
Gaming Performance
For gaming testing, we ran a wide range of titles, both AAA and indie.
We also did in-depth testing using Sid Meier's Civilization VI, Grand Theft Auto V, Ghost Recon: Wildlands, Shadow of the Tomb Raider, Monster Hunter: World and Far Cry 5.
For comparison, we tested against the Intel i9-9900K and i7-9700K. We tested with MCE on for both systems, and our 9900K was running at 5GHz all-core. This is not an unreasonable overclock for 98% of 9900Ks out there, provided your cooling is adequate, and we think it best represents the level of performance someone spending on a 9900K would seek.
Intel also recently released Performance Maximizer, a utility that automatically finds your maximum fully-stable overclock. A 9900K running at stock leaves a lot of performance on the table, and this software does help close the performance delta between these higher-end Ryzen 3000 CPUs and the 9900K. At default speeds, the 9900K lags far behind the Ryzen 9 3900X (which is comparable in pricing) in most single-threaded and nearly all multi-threaded real-world workloads.
We ran with the default Windows 1903 security mitigations. Currently, AMD is much better in this regard: Ryzen 3000 mitigates these vulnerabilities in hardware (or simply isn't affected), so there's no real performance loss from the mitigations.
You can see even from the basic built-in CPU-Z benchmark (a 1t reference score of about 580 for the Intel 9900K vs. our observed score of about 530) that the 9900K's 1t performance has declined overall since that reference sample was added to CPU-Z, presumably due in part to mitigations.
The performance graphs speak for themselves – generally AMD is at parity for single thread (to be clear: they win some and they lose some). However, in real-world and gaming scenarios, AMD is generally better overall, especially on newer titles. The Intel systems do eke out a small victory here and there, but tend to be much more sensitive to background processes. It's not hard to imagine that during a long gaming session if something kicks off in the background you're going to notice it a lot more on the Intel platform than AMD.
In general, 8-core (and up) CPUs are not needed for gaming. AMD has been very careful to offer higher and higher max 1t boost clocks as you step up the Ryzen lineup, however. The implied question: is Ryzen 5 performance for 1-2 threads just as strong? I plan to revisit gaming performance with a Ryzen 5 CPU I've picked up from Microcenter. While 1t max boost overclocking was not as strong as I'd hoped on the 3700X (~4.4GHz), I am hoping Ryzen 5 users will see roughly the same performance in games as we see in these benchmarks.
AM4 is dead; long live AM4!
AMD has taken the AM4 CPU socket from 4 cores to 16 and to PCIe 4.0. I'm excited for a new socket; how much more could they possibly get out of AM4?
We were able to test backward compatibility of the 3900X in the MSI Tomahawk B450 board. Yes, a B450 board rocking 12 cores. It worked pretty well.
There was minor performance loss, and the UEFI offered PBO, though I couldn't tell whether PBO was really working as it should on the 12-core CPU. The 3900X would boost to 4.2-4.3GHz all-core, which is impressive. It may be possible to get by with a mild overclock on this board; I was extremely impressed that this worked at all, and that it worked reasonably well.
If you're still on that Phenom II or i7-2600K and just can't wait any more, opt for an X570 board. You'll be glad you did.
I believe that auto-overclocking and the new PBO features are reserved for X570 boards at this time. That may change via BIOS mods or hacks, or a board partner going rogue, but the restriction kind of makes sense.
Don't expect PCIe 4.0 to work on older boards, either. I met with the engineers, and while PCIe 4.0 might work on a particular board, it's not guaranteed to work on every board of that model. So I might have an ASUS Crosshair VII Hero with a slot 1 that happens to work at PCIe 4.0 speeds, but the number of PCIe bus errors I'd see in the Linux console would make me highly uncomfortable. Even the ASRock Taichi currently exposes a number of PCIe 4.0 redriver tunables, just because getting signal integrity exactly right can be such a huge pain from an engineering perspective.
Does this mean PCIe extension cables are dead? Certainly I wouldn't recommend them for PCIe 4.0. It was hard enough finding a PCIe 3.0 cable that didn't introduce problems; PCIe 4.0 extension cables are nigh-impossible right now.
Be warned, AIB partners are focused on fixing bugs and solving issues with X570 right now. I'd say it will be around the end of July, at the earliest, before the older boards get the attention they need for best compatibility.
Precision Boost, Precision Boost Overdrive, Overclocking, Gotchas and Final Notes
Keep an eye on the Level1 forum for the most up-to-date information. When I initially got the press kit graciously supplied by AMD, there was immediately a UEFI update for all boards. That UEFI update was a regression in a lot of ways (IOMMU groups, max 1t boost speed, etc.), at least for me and my particular CPUs.
After updating the UEFI around 6/30, the situation improved quite a bit. Before the update, I was struggling to get even 1t performance on the 3700X to the advertised 4.4GHz boost; it was more like 4.2GHz.
The Wraith Prism cooler bundled with the 3700X and 3900X is more than adequate for everyday use and even overclocking. The 3700X can be overclocked quite a bit with this cooler; the 3900X will benefit from an AIO cooler though not dramatically so, at least initially.
Precision Boost Overdrive is different for 3000 series CPUs, but it still works pretty well. Enabling it allows your motherboard to communicate with the CPU about how much power can be delivered, and for how long. This part of PBO isn't new.
What is new is Automatic Overclocking: you can specify up to an extra +200MHz on top of the Max Boost frequency printed on the box, and the CPU treats that as its new "Max Boost" speed.
This sounds like it would be awesome, because you allow the CPU to self-manage thermals, power and frequency while simultaneously allowing it to exceed the max boost speed on the box.
For me, it was a bit of a mixed bag, perhaps colored by early experiences with the initial board UEFI. Subjectively (again, sample size of 1 here), the max clock speeds for the 3700X were around 4.0 to 4.1GHz all-core boost in almost all scenarios (even AVX workloads!) and a 4.42GHz 1t boost in most 1t workloads.
PBO generally improved the all-core clocks, but 1t clocks either did not improve or regressed. I am hoping this gets a little better with more UEFI updates. Enabling Automatic Overclocking usually resulted in a minor performance regression, even with only a 25MHz bump.
This is also perhaps a loss on the silicon lottery side of things.
To be clear, at stock speeds this is still an incredibly impressive chip. My experience with Ryzen 1000 and 2000 series chips was that AMD was incredibly good at binning, and that, generally, one just couldn't do better than letting PBO do its thing.
My experience with the 3900X was somewhat different. For the 3900X, which is a beastly 12-core chip, I was finally able to achieve a stable 4.5GHz all-core overclock (even for AVX workloads!) at 1.45V with a manual overclock and a lot of fiddling. PBO, Ryzen Master, and the UEFI settings for Precision Boost Overdrive + Auto Overclocking didn't do quite as well, though I would occasionally see clocks of about 4.55GHz from the 3900X with everything opened up and PBO enabled. Like the 3700X, the 3900X was a little "shy" about clocking up to 4.6GHz, the Max Boost clock printed on the box, especially on the initial UEFI versions. Later UEFI versions did improve this, and auto-overclocking of +75MHz did seem to work.
With Auto Overclocking, I saw a max of about 4.65GHz for 1t. In the review guide AMD provided, they suggested 4.3GHz all-core was not unreasonable, but with PBO simply set to "Enabled" I never saw anything above 4.25GHz all-core.
With that manual all-core overclock of 4.5GHz at 1.45V, things were mostly stable, yielding a score of over 9100 in CPU-Z's built-in mini-benchmark.
I am not sure what the maximum safe long-term voltage is for 7nm, but 1.45V for long-term use strikes me as perhaps a little high, so I would not recommend you do this.
I am looking forward to testing these aspects of the 3800X.
Because of my experience, I am betting there will be more variability in other reviewers' evaluations of these products. I used an entire 40g tube of Arctic MX-2 installing and reinstalling heatsinks for this testing. These CPUs are somewhat sensitive to mounting and paste application, which makes sense if you think about the chiplet design.
So, overall, I would say overclocking is still somewhat limited, even with the new PBO options and the +200MHz auto-overclock. Even under LN2 at the AMD E3 event, clock speeds topped out around 5.4GHz. Still, it's nice to have the option if you're lucky enough to have a chip that can do just that much more.
We have every reason to think the 3700X and the 3900X have been very carefully binned from the outset. But it isn't as if you need overclocking to beat the competing Intel CPUs: even overclocking my Intel 9900K to 5.1GHz didn't deliver enough of a performance bump to make up the 1t lead AMD has in applications like Cinebench R20.
If you pick up a new Ryzen 7 3700X or Ryzen 9 3900X, one quick, easy thing you can do is run CPU-Z's default CPU benchmark, set to 1 thread. If you're getting a score of less than 500, something is probably wrong: try loading optimized UEFI defaults, setting XMP and/or remounting your cooler. You will almost certainly want to update your motherboard's UEFI (a relatively painless process) on day 1. Okay, you won't strictly have to, but single-threaded performance was not as high on the initial UEFI as on the UEFIs that showed up between 7/5 and 7/7 (and there may still be pending updates from 7/7 and/or 7/8). So I think reviews around the internet are going to vary by a few percentage points, especially around lightly-threaded apps. The CPU-Z or Cinebench test is a pretty easy way to verify that you've got a properly-performing system. So do it. :D
And, congratulations are in order for AMD. This product is making history. It is truly formidable.
Even if you bleed blue for legitimate reasons, in the end you will have AMD and this processor to thank for reawakening the blue giant and providing better products at better prices.