AMD Radeon RX Vega 64 makes $100 loss at MSRP, says report

by Mark Tyson on 30 August 2017, 14:31

Tags: AMD (NYSE:AMD)

Quick Link: HEXUS.net/qadk7i


The AMD Radeon RX Vega graphics cards for enthusiasts and gamers were launched just over a fortnight ago. Almost immediately after the RX Vega 64 became available there were reports of pricing shenanigans. In particular, there was controversy over whether the entry-level RX Vega 64 SKU's £449 price was merely an introductory special offer or genuine launch pricing. After the initial flurry of sales, retailers and etailers were seen to push prices up by £100 or more, albeit with two 'free games' bundled.

At the end of last week we heard via DigiTimes that the price problem was simply down to demand, as per classic economic theory. Distributors and retailers were selling the new Vega cards at large margins above MSRP because the market tolerated it. Shortages were, according to industry sources, a result of difficulties faced by AMD's manufacturing partners in integrating HBM2 with the GPU.

In a report published by Fudzilla today, the plot thickens… According to its industry sources, the MSRP set by AMD for the Vega 64 is simply too low for the card to make any money. Fudzilla's insider says that, due to the cost of HBM2, the substrate and chip packaging, AMD is losing at least $100 on every Vega 64 sold at MSRP.

It is explained that AMD plans to absorb a limited loss on these initial sales and later work on the manufacturing technology and component prices to lower its costs. Fudzilla says that from October SK hynix will start to deliver HBM2 and memory pricing will become much more favourable. That date ties in with the earlier DigiTimes reports of when extra RX Vega graphics card stock will start to arrive at retailers.
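To put the claimed numbers in perspective, here is a minimal back-of-envelope sketch in Python. The $499 figure is the public US MSRP for the air-cooled Vega 64 and the $100 loss is Fudzilla's claim; the October memory saving is a purely hypothetical number I've plugged in for illustration, not a sourced figure.

```python
# Back-of-envelope maths on the reported per-card loss.
MSRP_USD = 499        # US launch MSRP, air-cooled RX Vega 64 (public figure)
REPORTED_LOSS = 100   # Fudzilla's "at least $100" per-card claim

# If AMD really loses $100 per card at MSRP, its all-in cost per card
# (HBM2, substrate, packaging, GPU die and so on) is at least:
implied_cost_floor = MSRP_USD + REPORTED_LOSS
print(f"Implied per-card cost: at least ${implied_cost_floor}")

# Hypothetical: if cheaper SK hynix HBM2 trimmed $60 off the memory bill
# from October (an illustrative guess), the per-card loss would shrink to:
hypothetical_hbm2_saving = 60
print(f"Remaining loss: about ${max(REPORTED_LOSS - hypothetical_hbm2_saving, 0)}")
```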

Looking at Scan today I see that it is actually possible to grab an in-stock Standalone 8GB MSI Radeon RX Vega 64 at £469.99 (see screenshot above). That's the same price as Scan's cheapest RX Vega 56, from Sapphire, which was also in stock at the time of writing.

Vega 56 flashing

If the AMD Radeon RX Vega 56 cards were, as MSRPs would suggest, significantly cheaper than their full 4096-core brethren, it might be worth fiddling with the BIOS to get extra performance for zero £$£$.

Today VideoCardz reports that a ChipHell forum user was looking at the possibility of upgrading a Vega 56 into a Vega 64 by modifying the BIOS. Computing history is full of examples of this kind of jiggery-pokery achieving results. However, the result noted by 'KDtree' wasn't exactly as expected.

The BIOS mod didn't enable any extra cores, but it did succeed in raising the default GPU boost and HBM2 memory clocks to 1545MHz and 945MHz respectively. In some simple user tests the revamped Vega 56 was just two per cent slower than a stock Vega 64. These are interesting results, and said to be achievable following a "simple mod". With RX Vega series cards having two BIOSes there is said to be little risk of bricking the card - but accidents happen, so be extremely careful if you tinker.
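For a sense of how much ground that flash makes up on paper, here is a quick arithmetic sketch in Python. The stock Vega 56 reference clocks used below are the commonly quoted figures and are my assumption, not values taken from the report; the modded clocks are those noted above.

```python
# Rough clock-uplift arithmetic for the Vega 56 -> Vega 64 BIOS mod.
VEGA56_BOOST_MHZ, VEGA56_MEM_MHZ = 1471, 800   # stock RX Vega 56 reference (assumed)
MODDED_BOOST_MHZ, MODDED_MEM_MHZ = 1545, 945   # after the Vega 64 BIOS flash (per report)

core_uplift = (MODDED_BOOST_MHZ / VEGA56_BOOST_MHZ - 1) * 100
mem_uplift = (MODDED_MEM_MHZ / VEGA56_MEM_MHZ - 1) * 100

print(f"GPU boost clock uplift: {core_uplift:.1f}%")   # ~5%
print(f"HBM2 memory clock uplift: {mem_uplift:.1f}%")  # ~18%

# Even with only 3584 of 4096 stream processors active, the user's tests
# reportedly put the modded card within ~2% of a stock RX Vega 64.
```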



HEXUS Forums :: 8 Comments

“With RX Vega series cards having two BIOSes there is said to be little rick of bricking the card”

PICKLE RICK !!!!!!
I've been kind of laughing at the mess that is AMD's Vega release, but if this works out and I can get ~1080 performance from a Vega 56 at €399, I might actually consider it… I had 2x 970s in SLI but one blew up last week.
Let's forget the high-end Vega and just make a sub-$150 Vega APU for notebooks and budget desktops. I am sure an R460-category Vega will demolish Intel HD graphics to planet Pluto.
No wonder, as AMD counted on SK Hynix and ended up having to source HBM2 from Samsung!

I mean Vega was meant to be out at the end of last year.
If you can get similar performance by boosting clocks to the same level as the 64, but still without the extra cores enabled, I wonder how far you could go down that path before you really started to notice performance degradation.
I guess what I'm wondering is: if AMD weren't saving costs on design and manufacture by building a "one chip fits all" solution, and instead built a dedicated gaming-focused chip with fewer CUs and perhaps higher clocks, just how much more efficient could it have been?