Investigation finds Intel Kaby Lake-G is more Polaris than Vega

by Mark Tyson on 10 April 2018, 10:11

Tags: AMD (NYSE:AMD), Intel (NASDAQ:INTC)

Quick Link: HEXUS.net/qadsju


The news of Intel's Kaby Lake-G processors was one of the biggest tech revelations of late 2017. Last November Intel announced that it would combine its high-performance CPU cores with a discrete AMD Radeon GPU and HBM2 graphics memory on a multi-chip module. The product launched at CES 2018, where it was revealed that the on-board AMD Radeon graphics were custom designed, based upon a 5th-generation GCN Vega core with up to 24 CUs. That offers a maximum of 1,536 RX Vega-class shaders connected to 4GB of HBM2 memory via the EMIB interface, we learned.

In its presentation slides Intel referred to the GPU components as Radeon RX Vega M GH graphics (24 CUs) and Radeon RX Vega M GL graphics (20 CUs), so it seemed clear that Intel was leveraging AMD Radeon RX Vega technology in these Kaby Lake-G chips. However, over the last week or so, some investigations have muddied those clear waters.

Late last week PCWorld took a close look at Intel Kaby Lake-G and found itself conflicted over whether the AMD GPU within was really Vega or closer to Polaris. Interestingly, it found that certain sys-info tools identify the GPU as Polaris: AIDA64, for example, reports the Core i7-8809G inside the Hades Canyon NUC as being equipped with a Polaris 22 GPU. Yet the same tool correctly identifies the Ryzen 3 2200G as Vega-based, and correctly flagged a Radeon RX 580 and a Radeon RX Vega 56 card. PCWorld also examined Microsoft DirectX capabilities and found the Kaby Lake-G chips lack DirectX 12.1 support, an advertised feature of Vega.

Yesterday Tom's Hardware probed the Kaby Lake-G Polaris-or-Vega quandary further. It discovered that another headline feature of RX Vega, Rapid Packed Math from the next-generation compute unit (NCU), doesn't appear to work on Intel Kaby Lake-G. Some AAA games already leverage Rapid Packed Math, for example Far Cry 5 and Wolfenstein II: The New Colossus.
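For readers unfamiliar with the feature: Rapid Packed Math packs two 16-bit operands into a single 32-bit register lane, so the GPU can execute two FP16 operations in the slot of one FP32 operation. The following Python sketch illustrates only the packing idea in software (the function names are our own; this is not how GPU shaders are actually programmed):

```python
import struct

def pack_fp16_pair(a: float, b: float) -> int:
    """Pack two values into one 32-bit word as a pair of IEEE 754
    half-precision (FP16) numbers; 'a' occupies the low 16 bits."""
    return int.from_bytes(struct.pack('<ee', a, b), 'little')

def unpack_fp16_pair(word: int) -> tuple:
    """Split a 32-bit word back into its two FP16 lanes."""
    return struct.unpack('<ee', word.to_bytes(4, 'little'))

def packed_add(x: int, y: int) -> int:
    """One 'packed' add: both FP16 lanes are summed in a single call,
    mimicking how one packed instruction does two FP16 operations."""
    x0, x1 = unpack_fp16_pair(x)
    y0, y1 = unpack_fp16_pair(y)
    # Re-packing rounds each result back to FP16 precision.
    return pack_fp16_pair(x0 + y0, x1 + y1)

word = packed_add(pack_fp16_pair(1.5, 2.0), pack_fp16_pair(0.5, 3.0))
print(unpack_fp16_pair(word))  # (2.0, 5.0)
```

On hardware without packed-math support, each of those two FP16 adds would occupy a full 32-bit slot of its own, which is why the feature matters for games that use it.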

What it comes down to is that the semi-custom GPU AMD delivered for the Kaby Lake-G series offers some RX Vega features but not all of them. Of the official RX Vega highlights, we now know that Rapid Packed Math is absent, while others, such as HBM2 memory and the High-Bandwidth Cache Controller, are definitely present.

HEXUS hasn't yet had a Kaby Lake-G system through the labs but others have, and whatever the branding of the AMD GPU component, Kaby Lake-G seems to have lived up to expectations and delivered pleasing results.



HEXUS Forums :: 20 Comments

Another pointless ploy by Intel to make their CPUs more expensive for anyone with a dedicated graphics card. Not to mention the stress on the fan!
Both the PS4 PRO and XBox One X GPUs have a mixture of Polaris and Vega features too.
CAT-THE-FIFTH
Both the PS4 PRO and XBox One X GPUs have a mixture of Polaris and Vega features too.

Which makes sense. I'm sure AMD provided whatever Intel asked for, and there will be features like fast double precision that make no sense for Intel. From the timing, I think we can assume they cut and pasted the graphics cores from either a games console or Ryzen. There are also features like video transcode which I presume are handled by Intel's UHD630 core so can be omitted from the Vega companion die, unless the Vega die is general purpose enough to turn up elsewhere.
DanceswithUnix
Which makes sense. I'm sure AMD provided whatever Intel asked for, and there will be features like fast double precision that make no sense for Intel. From the timing, I think we can assume they cut and pasted the graphics cores from either a games console or Ryzen. There are also features like video transcode which I presume are handled by Intel's UHD630 core so can be omitted from the Vega companion die, unless the Vega die is general purpose enough to turn up elsewhere.

The PS4 PRO GPU has FP16, whereas the XBox One X GPU lacks FP16 but uses Vega-derived shaders AFAIK.

Edit!!

Another consideration could be drivers - I assume it's easier for Intel to manage drivers based on a slightly older GPU feature set??
00oceanic
Another pointless ploy by Intel to make their CPUs more expensive for anyone with a dedicated graphics card. Not to mention the stress on the fan!

You seem to be missing the point. These chips are for laptops and very small form factor machines where discrete graphics aren't an option. You're not going to be able to buy one of these to drop in a s1151 motherboard - they're BGA only (i.e. soldered to the motherboard).

The point is that the tightly integrated components give you a simpler, smaller design footprint, and also allow Intel to play a few power-management tricks to balance the distribution of power between the CPU and GPU - something you can't do with a separate CPU and dGPU.

TDPs vary between 65W and 100W, so well within the capabilities of modern cooling technology, I assure you ;)