JEDEC says DDR5 and NVDIMM-P standards will be ready next year

by Mark Tyson on 3 April 2017, 12:01

AMD CPU users might have only just got DDR4 RAM support via the new AM4 motherboard platform, but tech relentlessly marches on, with DDR5 now on the horizon. Just ahead of the weekend, JEDEC, the industry association for standards in solid-state microelectronics, announced that both the DDR5 and NVDIMM-P standards are "moving forward rapidly". The standards are forecast to be published sometime next year.

DDR5 RAM will offer a number of advantages over its predecessor DRAM technologies:

  • Double bandwidth of DDR4
  • Double density of DDR4
  • Improved channel efficiency compared to DDR4
  • Greater power efficiency

In addition to the above, JEDEC says that DDR5 includes a "more user-friendly interface for server and client platforms" for those who wish to tweak performance and power management.
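To put the doubled-bandwidth claim into numbers, here is a minimal sketch of the peak-bandwidth arithmetic, assuming a 64-bit data bus per DIMM and illustrative speed grades (DDR4-3200 and DDR5-6400); the announcement itself names no speed bins.

```python
# Rough peak-bandwidth arithmetic for a single DIMM.
# DDR4-3200 and DDR5-6400 are illustrative speed grades,
# not figures from the JEDEC announcement.

BUS_WIDTH_BYTES = 64 // 8  # 64-bit data bus = 8 bytes per transfer

def peak_bandwidth_gbs(mega_transfers: int) -> float:
    """Peak bandwidth in GB/s for a given MT/s rating."""
    return mega_transfers * BUS_WIDTH_BYTES / 1000

for name, mts in [("DDR4-3200", 3200), ("DDR5-6400", 6400)]:
    print(f"{name}: {peak_bandwidth_gbs(mts):.1f} GB/s per DIMM")
# DDR4-3200: 25.6 GB/s per DIMM
# DDR5-6400: 51.2 GB/s per DIMM
```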

NVDIMM-P is a hybrid DIMM technology designed to provide high-capacity persistent memory modules for computing systems. The standard augments rather than replaces the existing (lower capacity but faster) NVDIMM-N, providing memory solutions optimised for cost, power usage and performance. Such hybrid technology could perhaps compete with Intel Optane.

A senior member of the JEDEC board said that progress on the new DDR5 and NVDIMM-P standards is going well. JEDEC will provide progress updates ahead of the publication of the finished standards next year. For example, there's a JEDEC Server Forum event in Santa Clara, CA on Monday, 19th June, where each standard will be discussed in further detail.



HEXUS Forums :: 5 Comments

Well hopefully they release some futureproof timings for once so we don't have to keep messing around with XMP and the like.

Then again I've never understood why they use 33MHz steps for their base clocks either; it creates some needlessly strange FSB : DRAM ratios when AMD and Intel use 100/200MHz base clocks.
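For what it's worth, the ratios in question can be made concrete with a quick sketch, assuming a 100MHz base clock and a few DDR3 grades (figures are illustrative, not from the article):

```python
from fractions import Fraction

# Why 33 1/3 MHz memory steps make awkward ratios against a
# 100 MHz CPU base clock; exact arithmetic via Fractions.

BASE_CLOCK = Fraction(100)            # MHz, e.g. Intel BCLK

MEM_CLOCKS = {
    "DDR3-1066": Fraction(1600, 3),   # 533 1/3 MHz I/O clock
    "DDR3-1333": Fraction(2000, 3),   # 666 2/3 MHz
    "DDR3-1600": Fraction(800),       # 800 MHz, a clean multiple
}

for name, mem in MEM_CLOCKS.items():
    ratio = mem / BASE_CLOCK
    print(f"{name}: DRAM:BCLK = {ratio.numerator}:{ratio.denominator}")
# DDR3-1066: 16:3, DDR3-1333: 20:3, DDR3-1600: 8:1
```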
CAPTAIN_ALLCAPS
… Then again I've never understood why they use 33MHz steps for their base clocks either …

If you'd been an overclocker 10 or more years ago you would - or indeed 6 or more years ago, given that Intel only settled on a 100MHz base clock with Sandy Bridge in January 2011. The earlier Core i chips were 133MHz, and Core 2 FSB could be 200, 266, 333 or 400 MHz.

AMD, OTOH, settled on 200MHz base clock when it did away with FSBs for Athlon 64, almost 15 years ago, and have been playing with unusual memory dividers ever since…!
scaryjim
If you'd been an overclocker 10 or more years ago you would - or indeed 6 or more years ago, given that Intel only settled on a 100MHz base clock with Sandy Bridge in January 2011. The earlier Core i chips were 133MHz, and Core 2 FSB could be 200, 266, 333 or 400 MHz.

AMD, OTOH, settled on 200MHz base clock when it did away with FSBs for Athlon 64, almost 15 years ago, and have been playing with unusual memory dividers ever since…!

2011 is a long time ago in the computing world; the DDR4 specs were finalised some 44 months later in September 2014, and Intel must have made their new platform structure known long before Sandy Bridge hit the shelves.
Not even on DDR4 yet….
CAPTAIN_ALLCAPS
… the DDR4 specs were finalised some 45 months later in September 2014 and Intel must have made their new platform structure known long before Sandy Bridge hit the shelves.

Oh, absolutely, not arguing that - just that a lot of tech standards are very slow to change, and it's not that long ago in real terms that mainstream CPUs were using a 133MHz base clock.

AFAIK the JEDEC DDR standards are intended to be a continuous evolution, so I suspect there's an underlying timing spec that uses a 33MHz (or possibly 133MHz) tick that's core to the specification and would be a major revision to alter. After all, the interface between the memory controller and the CPU isn't JEDEC's concern - they just specify how the memory clocks. So there's no good reason they should create specifications just to pander to the whims of x86 computing manufacturers. It's not even as if the majority of computing devices are x86…
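For what it's worth, that 133MHz hunch checks out against the familiar DDR4 speed grades; a quick sketch, with the grade list quoted from memory rather than from anything JEDEC has announced:

```python
from fractions import Fraction

# The common DDR4 speed grades all land on a 133 1/3 MHz
# I/O-clock tick (a 266 2/3 MT/s data-rate step). Grades are
# quoted from memory, not from the JEDEC announcement.

TICK = Fraction(400, 3)                 # 133 1/3 MHz

GRADES = [
    ("DDR4-1600", Fraction(1600)),
    ("DDR4-1866", Fraction(5600, 3)),   # really 1866 2/3 MT/s
    ("DDR4-2133", Fraction(6400, 3)),   # really 2133 1/3 MT/s
    ("DDR4-2400", Fraction(2400)),
    ("DDR4-2666", Fraction(8000, 3)),   # really 2666 2/3 MT/s
    ("DDR4-3200", Fraction(3200)),
]

for name, data_rate in GRADES:
    io_clock = data_rate / 2            # DDR: two transfers per clock
    steps = io_clock / TICK
    assert steps.denominator == 1       # every grade sits on the tick
    print(f"{name}: {steps} x 133 1/3 MHz")
```

Which is to say, the "tick" the commenter suspects would fall out naturally if the spec counts in whole steps of 133 1/3 MHz.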