der8auer examines AMD X570 chipset power consumption

by Mark Tyson on 9 July 2019, 14:11

Tags: AMD (NYSE:AMD)

Quick Link: HEXUS.net/qaebnd



Overclocking expert der8auer has shared a new video taking a deep dive into the power consumption of the AMD X570 chipset (German-language version here). This is a bit of a hot topic with enthusiasts, as AMD and its motherboard partners have gone against the grain by fitting active fan coolers to the chipsets of almost all the X570 motherboards we have seen become available thus far.

In the HEXUS editor's study of the X570 chipset, published on Sunday, it was observed that "X570 is clearly superior to X470 in every way other than power". We hear that it draws 13W at most when put under the cosh, double the power consumption of X470 - a necessary 'evil', one might conclude. Most assumed the extra power was needed to cope with the demands of new super-fast PCIe 4.0 devices (a major difference from X470). However, der8auer's initial testing shows that stressing PCIe 4.0 peripherals doesn't make much of a difference to power consumption…

At the start of the video der8auer explains the difficulties in monitoring chipset power consumption - the chipset isn't like a peripheral hanging off a single connector; there are multiple connections to the PSU on both sides of the motherboard. After some deft soldering of wires to strategic power-delivery components on the motherboard, he ended up with a bunch of spaghetti whose readings could be monitored, summed, and converted into a single chipset power consumption figure.
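As a rough illustration of that 'monitor, sum and convert' step, here is a minimal sketch of the arithmetic involved - note that the rail names and readings below are invented for illustration, not der8auer's actual measurement points:

# Hypothetical sketch of combining per-rail measurements into one
# chipset power figure. Rail names and values are made up.

rail_samples = {
    # rail: (voltage in V, measured current in A)
    "rail_a_3v3": (3.3, 0.62),
    "rail_b_1v0": (1.0, 2.10),
    "rail_c_1v8": (1.8, 0.95),
}

# P = V * I per rail; the chipset total is the sum over all tapped rails.
per_rail_watts = {name: v * i for name, (v, i) in rail_samples.items()}
total_watts = sum(per_rail_watts.values())

for name, watts in per_rail_watts.items():
    print(f"{name}: {watts:.2f} W")
print(f"Chipset total: {total_watts:.2f} W")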

Moving on to about 10mins 30s in the video, der8auer says he spent a couple of weeks testing the X570 chipset against the X470 chipset, and he has some interesting tables to share. The main results table is reproduced below, and you can see the top three (blue) results are from an X470: at idle, with one fast NVMe SSD attached, and then with that SSD under load (running CrystalDiskMark).
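For readers wanting to reproduce the load scenario, a simple sequential-write loop is enough to keep an NVMe drive busy while power is logged. Der8auer used CrystalDiskMark, so treat the Python stand-in below as a rough approximation only; the file path is a placeholder:

# Keep a drive busy with sequential writes while power is logged
# externally. A rough stand-in for CrystalDiskMark's write test.
import os
import time

TARGET = "stress.bin"                  # place on the drive under test
BLOCK = b"\xAA" * (4 * 1024 * 1024)    # 4 MiB per write
DURATION_S = 60

deadline = time.monotonic() + DURATION_S
written = 0
with open(TARGET, "wb") as f:
    while time.monotonic() < deadline:
        f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())           # force each block out to the device
        written += len(BLOCK)

print(f"Wrote {written / 1e9:.1f} GB in {DURATION_S} s")
os.remove(TARGET)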

You can see that the X570's power consumption starts from a much higher base, idling at 7.35W with no NVMe drive attached. The yellow-highlighted bars show the maximum power usage der8auer observed with various combinations of NVMe Gen 3 and SATA drives, one of which also included a PCIe graphics card running the FurMark benchmark. Finally, the bottom two results, in red, show power consumption with a Corsair NVMe Gen 4 SSD attached, at idle and under load. Der8auer is surprised by how little difference there is between NVMe Gen 3 and Gen 4 under load in terms of power consumption, "because everybody kept telling us (at Computex) that the chipset has much higher power consumption because of PCI Express Gen 4." In summary, the OC expert couldn't pin down why the chipset actually requires so much extra power. Hopefully things will become clearer as boards are tested by more tech review sites like ourselves; it is still early days.

Last but not least, der8auer says he "really [has] no idea what is going on with these (X570) cooling solutions". He says that he managed to raise chipset power consumption in his testing to near 10W by attaching multiple devices and stressing them all. Then, with "almost no airflow" and a tiny passive heatsink (pictured above) in place of the active fan cooler, temperatures peaked at 74°C.
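Those two figures allow a quick back-of-envelope check of the passive cooling result - assuming an ambient of around 25°C, which is our assumption rather than a number from the video:

# Effective case-to-ambient thermal resistance implied by the test.
# The 25 C ambient is assumed; der8auer doesn't state it above.
power_w = 10.0       # ~10 W chipset load in der8auer's stress test
t_peak_c = 74.0      # peak temperature on the tiny passive heatsink
t_ambient_c = 25.0   # assumed room temperature

theta_ca = (t_peak_c - t_ambient_c) / power_w   # degrees C per watt
print(f"Effective thermal resistance: {theta_ca:.1f} C/W")
# ~4.9 C/W with almost no airflow still only reaches 74 C at 10 W,
# short of where chipsets normally throttle - hence der8auer's
# puzzlement over the near-universal fans.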

Please stay tuned for several X570 chipset motherboard reviews that we have in the pipeline at HEXUS.



HEXUS Forums :: 19 Comments

Tunnah:
Why would it be an M.2 that pushes the power up? Lane-wise, it is insignificant. Put in a pair of x16 Gen 4 cards and rev them up. The difference between Gen 3 and 4 is so tiny because, percentage-wise, as a piece of the entire pie, the amount is minuscule.

kompukare (replying to Tunnah):
Yes, but if an x16 graphics card only loses 1-2% of its potential speed going from PCIe 3.0 to 2.0, it follows that it will be very, very hard to get a graphics card to really push PCIe 4.0. NVMe drives, on the other hand, are currently using everything a PCIe 3.0 x4 slot can deliver.
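For reference, kompukare's figures line up with the theoretical PCIe link rates. A quick worked calculation (real-world throughput lands a little lower after protocol overhead):

# Theoretical PCIe bandwidth per direction: transfer rate x lanes,
# scaled by line encoding (8b/10b for Gen 1/2, 128b/130b for Gen 3/4).
GENS = {
    # gen: (GT/s per lane, encoding efficiency)
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
}

def bandwidth_gb_s(gen: int, lanes: int) -> float:
    gt_s, efficiency = GENS[gen]
    return gt_s * efficiency / 8 * lanes   # Gb/s -> GB/s, times lanes

print(f"Gen3 x4 : {bandwidth_gb_s(3, 4):.2f} GB/s")   # ~3.94 - today's NVMe ceiling
print(f"Gen4 x4 : {bandwidth_gb_s(4, 4):.2f} GB/s")   # ~7.88 - headroom for Gen4 SSDs
print(f"Gen3 x16: {bandwidth_gb_s(3, 16):.2f} GB/s")  # ~15.75 - GPUs rarely saturate this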
afiretruck (replying to kompukare):

There is a benchmark that AMD used to demonstrate the difference PCIe 4.0 makes over PCIe 3.0 with the Navi GPUs. Not sure if this benchmark actually saturates PCIe 4.0 x16, but it's a good start.

[Unattributed commenter]:
I think he may have forgotten something: the Gen 4 NVMe was run on its own and drew 8.86W under load. Now, if you ran that alongside the bottom yellow line's configuration, would you not have to add them up? Or is it a case of one or the other?
In reply to afiretruck:

If you create a benchmark that just tests bandwidth then yes, you see an improvement running a PCIe 4 card on PCIe 4 over PCIe 3. That's all it shows, though; it doesn't suggest there is any kind of improvement when bandwidth is not the limit, which it isn't in any game or other current test.
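A bandwidth-only test of the sort described can be as simple as timing bulk host-to-GPU copies. The sketch below uses PyTorch and assumes a CUDA-capable GPU; it is a generic illustration, not the benchmark AMD showed:

# Time bulk host->device copies to estimate PCIe transfer bandwidth.
# Requires PyTorch and a CUDA-capable GPU; purely illustrative.
import time
import torch

N = 1 << 28   # 256 MiB payload
host = torch.empty(N, dtype=torch.uint8, pin_memory=True)
device = torch.empty(N, dtype=torch.uint8, device="cuda")

torch.cuda.synchronize()
reps = 20
start = time.perf_counter()
for _ in range(reps):
    device.copy_(host, non_blocking=True)   # host -> device over PCIe
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

print(f"Host->device: {N * reps / 1e9 / elapsed:.2f} GB/s")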