Micron GDDR6 production will begin in H2 this year

by Mark Tyson on 7 February 2017, 12:31

Tags: Micron (NASDAQ:MU)

Quick Link: HEXUS.net/qadd23

Micron recently published a set of slides and a webcast detailing its progress in various sectors of the memory market. You can check out the Micron source presentation files on the firm's investor relations site (PDF, see page 28 onwards for the graphics memory slides), which I found via VideoCardz.

Last year Micron brought out GDDR5X, a highly tuned version of GDDR5, which ended up being installed on Nvidia's flagship consumer card - the GeForce GTX 1080. GDDR5X pushed the envelope to provide up to 10Gb/s per pin. Now we have learnt that Micron has pulled forward the development of GDDR6, initially planned to launch in 2018, with production now starting in H2 this year. GDDR6 is said to be capable of 16Gb/s per pin. That's not up to HBM2 capabilities, but it is a good alternative option thanks to its expected price advantage and ease of implementation.
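To put those per-pin figures into aggregate terms, here is a back-of-the-envelope sketch assuming a 256-bit memory bus like the GTX 1080's (the helper function is illustrative only):

    # Back-of-the-envelope: per-pin data rate (Gb/s) x bus width, converted to GB/s
    def total_bandwidth_gb_per_s(per_pin_gbps, bus_width_bits=256):
        # Divide by 8 to convert gigabits to gigabytes
        return per_pin_gbps * bus_width_bits / 8

    print(total_bandwidth_gb_per_s(10))  # GDDR5X at 10Gb/s per pin -> 320.0 GB/s (the GTX 1080's figure)
    print(total_bandwidth_gb_per_s(16))  # GDDR6 at 16Gb/s per pin -> 512.0 GB/s on the same bus width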

GDDR6 isn't just faster than the GDDR5X it will be replacing; Micron says it will also use 20 per cent less power. Such efficiency is especially welcome in mobile PCs, where untethered battery life is limited.

With the production schedule moved forward, Micron is expected to release GDDR6 to manufacturers before the year is out. It estimates that GDDR6 will have almost completely replaced GDDR5 in PC and games console graphics by 2020.



HEXUS Forums :: 10 Comments

Is this too little too late? After ten years of stagnation in GDDR development, because there was no viable competition and so no need to improve, it took the advent of HBMv1 for them to bring out GDDR5X, and only now that HBMv2 is literally about to drop does GDDR6 get poked out.

As GDDR5X was a very Intel-esque “do just enough to be ahead of the curve” move, I have very little faith in GDDR6, and it's a complete non-starter for me.
Someone made a comment elsewhere that brought this into perspective, though. At the sacrifice of some die space, GDDR5X is still competitive on speed vs HBM2 and as far as I understand, considerably cheaper.
Case in point: if Vega 10's prime card has two stacks of HBM2 at 204GB/s, totalling 408GB/s, then it's still behind the GTX1080 which has, to be fair, been out for a bit now. P100's 720GB/s is reliant on using 4 stacks, something that likely won't be available on a mainstream or consumer card for a while.

Fast, neat little GDDR chips will be good on cheap cards for a while yet, IMO.
Ozaron
… At the sacrifice of some die space, GDDR5X is still competitive on speed vs HBM2 and as far as I understand, considerably cheaper. ….

I'm not convinced it's “considerably” cheaper. It certainly will be cheaper, but if it was that cheap I suspect nvidia would've used it throughout its range. The fact that only the top card in the range gets GDDR5X suggests it's still quite expensive - all the “cheaper” cards are making do with standard GDDR5…. And of course for the total cost of the card you've got to offset the interposer/HBM costs against the simplified PCBs since you don't have to run all those memory traces through them … I suspect the cost differential really isn't that significant…

Besides, it's not just “some die space”, it's also PCB space and power budget. When AMD were releasing Fury X they were talking about power savings in the region of 20W-30W: that's roughly 10% of the total power budget, which can either be used to make a lower-power card (Nano @ 180W had excellent perf/watt) or ploughed into boosting the GPU clocks for higher absolute performance.
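As a quick sanity check on that 10 per cent figure, taking the Fury X's 275W typical board power as the baseline (the 275W number is an assumption, not something stated in the comment):

    # Rough check of the "roughly 10% of total power budget" claim
    fury_x_board_power_w = 275              # assumed typical board power for Fury X
    for saving_w in (20, 30):
        print(round(saving_w / fury_x_board_power_w * 100, 1))  # ~7.3% and ~10.9%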

Ozaron
Case in point: if Vega 10's prime card has two stacks of HBM2 at 204GB/s, totalling 408GB/s, then it's still behind the GTX1080 …

Erm, GTX 1080 peak theoretical memory throughput is 320 GB/s - so Vega's 2-stack HBM2 implementation will have over 25% more bandwidth available…
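For the record, the arithmetic works out as follows, using the figures quoted in the thread (the per-stack rates are the commenters' estimates rather than confirmed specifications):

    # Bandwidth figures as quoted in the thread
    vega_hbm2_gbs = 2 * 204    # two stacks at 204 GB/s each = 408 GB/s
    gtx_1080_gbs = 320         # 256-bit GDDR5X at 10 Gb/s per pin
    p100_hbm2_gbs = 4 * 180    # four stacks at 180 GB/s each = 720 GB/s
    print(round((vega_hbm2_gbs / gtx_1080_gbs - 1) * 100, 1))  # 27.5 -> "over 25% more bandwidth"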
I'm not convinced cost factors into anything about the 1080/70 - looking at how much nvidia wants for them, I think the lack of GDDR5X is an artificial limitation to milk more money from consumers.
Xlucine
I think the lack of GDDR5X is an artificial limitation to milk more money from consumers

But that's pure speculation. GDDR5X is expensive, but how much more so than GDDR5, I'm not sure.
You also can't hide from the fact that they're producing the fastest graphics cards right now.