Gigabyte publishes AMD Ryzen 9 3950X overclocking manual

by Mark Tyson on 7 October 2019, 13:11

Tags: Gigabyte (TPE:2376), AORUS, AMD (NYSE:AMD)

Quick Link: HEXUS.net/qaeekd

Gigabyte's Aorus X570 overclocking guide PDF has turned out to be an interesting source of pre-launch AMD Ryzen 9 3950X testing data. The linked document looks at the OC possibilities open to users of the Gigabyte X570 Aorus Master motherboard with an AMD Ryzen 9 3950X processor installed, 16GB of DDR4-3200 RAM, and a liquid cooler with a 360mm radiator.

In the document Gigabyte notes that the 3950X has a Max Boost frequency of 4.7GHz - but that figure applies to just two cores. It then walks users through pushing their mobo/CPU combo to achieve the highest stable speeds with all 16 cores processing. There are the usual warnings about warranties and overdoing voltage tweaking.

After various clock, voltage, and memory adjustments in the Aorus BIOS's dedicated 'Tweaker' tab, users are encouraged to go through stability checking and stress testing. Whether or not a given set of tweaks proves stable, you may then want to revise them to be gentler or more aggressive and test again.
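In practice that stability checking is usually handed off to dedicated tools (Prime95, AIDA64, Cinebench loops and the like). Purely as a sketch of the principle involved - run deterministic work on every core and compare against a known-good answer, since a wobbly overclock tends to produce silently wrong results before it crashes - here is a toy Python harness. The workload, 60-second duration and pass/fail test are illustrative assumptions, not Gigabyte's procedure.

```python
# Toy all-core stability check: run the same deterministic floating-point
# workload on every logical CPU and flag any result that differs from a
# reference value. Illustrative only; not a substitute for Prime95/AIDA64.
import math
import multiprocessing as mp
import os
import time

def workload():
    # Deterministic FP loop: a stable core must return the identical
    # value every single time.
    acc = 0.0
    for i in range(1, 200_000):
        acc += math.sqrt(i) * math.sin(i)
    return acc

def worker(reference, seconds, errors):
    deadline = time.time() + seconds
    while time.time() < deadline:
        if workload() != reference:
            errors.put(os.getpid())   # record which worker went wrong
            return

if __name__ == "__main__":
    reference = workload()            # known-good answer
    errors = mp.Queue()
    procs = [mp.Process(target=worker, args=(reference, 60.0, errors))
             for _ in range(os.cpu_count() or 1)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("UNSTABLE" if not errors.empty() else "no errors detected")
```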

In Gigabyte's own tests it was found that "the AMD Ryzen 9 3950X can hit around 4.3GHz using around 1.4V Vcore". If you go to the end of the PDF, you will read that this is the limit for many of the samples Gigabyte tested, but it did get one stable at 4.4GHz, and it shared a Cinebench R15 run at this speed (score: 4,475cb).

On the topic of Cinebench benchmarking, the 4.3GHz overclocked processor achieved 4,384 points - a 452-point difference, better than 10 per cent, compared to stock. An Intel-based rival such as the Core i9-9980XE achieves around 3,700 points in Cinebench R15.
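For anyone who wants to sanity-check those numbers, the implied stock score falls straight out of the quoted figures:

```python
# Back-of-envelope check of the Cinebench R15 numbers quoted above.
oc_score = 4384           # points at the 4.3GHz all-core overclock
gain = 452                # quoted difference versus stock
stock = oc_score - gain   # implied stock score: 3932 points
print(f"stock ~ {stock} cb, uplift ~ {gain / stock:.1%}")  # ~11.5 per cent
```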

Remember, the AMD Ryzen 9 3950X was due to arrive in September but has been delayed to November for unspecified reasons. It will likely launch alongside third-gen Threadripper CPUs next month.



HEXUS Forums :: 7 Comments

cheesemp
While I'm not against overclocking (looks at my ageing 3570K with a decent OC), I'm wondering if an all-core OC is a good idea for gamers. Wouldn't an all-core OC just increase the chip temps and limit boost, so you no longer hit the max single-core speed? I know games are beginning to get better at threading, but I'm sure most games still benefit more from boosting, say, two cores to a higher clock than all 16 to a lower one. If you need raw multithreaded CPU grunt this does not apply of course (rendering etc.), but for today's games I think it does…

LSG501
To be fair, I wouldn't necessarily say gamers are the main target for the 3950X; I'd say it's aimed more at ‘home’ content creators, 3D work and the like.
Would I overclock all the cores? Probably not, as it will reduce the life expectancy at least a little if you're working it heavily with rendering, encoding etc.

globalwarning
I would think that, at least in part, it *is* aimed at gamers, since it will have the highest advertised clocks of any of their CPUs, and we all know gamers want every little edge they can muster to pull the most FPS. (Or at least that's the logic behind halo products.)

It does seem like having differentiated cores on higher core-count CPUs would make sense, especially with Windows having better and better support for core affinities. Being able to clock higher on, say, 4 cores, while the other 12 are “high enough” seems like it would be the best of both worlds.
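A quick sketch of the core-affinity mechanism mentioned above, using the cross-platform psutil package; the core numbers are hypothetical, since which cores actually boost highest varies from chip to chip:

```python
# Toy example: pin a process to a chosen set of logical CPUs so the OS
# scheduler keeps its threads on the cores you want running fastest.
# Requires the third-party psutil package (pip install psutil); works on
# Windows and Linux. Core numbers below are purely illustrative.
import psutil

def pin_to_cores(pid, cores):
    """Restrict process `pid` to the logical CPUs listed in `cores`."""
    psutil.Process(pid).cpu_affinity(cores)

if __name__ == "__main__":
    me = psutil.Process()             # this script's own process
    pin_to_cores(me.pid, [0, 1, 2, 3])
    print(me.cpu_affinity())          # -> [0, 1, 2, 3]
```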

Indeed, for gamers I suspect the higher clocks on fewer cores would be optimal. I wonder if it boosts based on workload? If you're doing some mighty rendering work and are loading all the cores, it may decide that clocking every core to a lower level is better than boosting just a few of them.

I also wonder how much of this is about staying within the TDP envelope? We all know that Intel chips tailor a lot of their boost behaviour to stay within certain TDP specifications rather than due to real thermal limits.

One thing I would say (which I think you were also suggesting) is that gamers wanting “the edge” is often just wasted money. Anandtech showed that for gaming an i3 paired with a better GPU is a far better spend. I think once you've maxed out the GPU, throwing monies at the CPU often results in spending a fortune for little appreciable gain.

I look at something like this and go “yeh, I could absolutely afford it but am I ever going to come close to taxing it? Isn't something like this wasted on someone like me?” I just don't render videos whilst I'm gaming and playing HDR 4K on a different monitor next to me.

I think for all but the wealthiest gamers, this kind of stuff is best left alone (even if the budget allows it) and the money saved put towards the next upgrade, or towards upgrading things which are often overlooked. Like sound. Or an HD floppy drive rather than that DD drive most people are sporting.

globalwarning
It does seem like having differentiated cores on higher core-count CPUs would make sense…

Both Intel and AMD chips do this already as part of their boost behaviour.