AMD and Havok – Jon Peddie speaks

by Jon Peddie on 12 June 2008, 17:35

Tags: AMD (NYSE:AMD)

Game play

Physics, as in destructible environments, adds amazing depth and dimension to game play. Being able to knock down fences while driving a car, shoot out lights or shoot through fences, blow up buildings and get different debris formations each time, or bounce off walls and trees gives game play more realism and fun than the shiniest pixel (and, mind you, that statement comes from Dr Pixel).

So we want physics, like, yesterday. The ISVs, as usual, are the hold-up: it's hard work to build complex 3D models that have a physics attribute attached to almost every vertex. And if the ISV makes that investment, she wants to know that there's some hardware out there that can exploit it. And, as too many tests have shown, even a quad-core CPU doesn't do that good a job of it.

Physics, however, is highly vectorizable and as such fits nicely on a GPU architecture. But I, as an avid game player, am not willing to give up any GPU cycles, especially if I'm trying to run an image hog like Flight Sim X with all atmospherics and terrain turned on while displaying it on my 30-inch 2560 x 1600 monitor. Hell, I need all the GPU I can get just to run BioShock with everything turned on.
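The "highly vectorizable" point is easy to see in code: a basic particle or rigid-body integration step applies the same arithmetic to every object independently, so each object can map to one GPU thread or SIMD lane. Here is a minimal, purely illustrative sketch in Python (plain lists standing in for GPU buffers; all names are hypothetical and not from Havok, PhysX, or any real engine):

```python
# Semi-implicit Euler integration: every particle is updated by the same
# arithmetic, with no dependency on any other particle -- exactly the
# data-parallel shape that maps one particle to one GPU thread.

GRAVITY = -9.81  # m/s^2, acting on the y axis

def step_particles(positions, velocities, dt):
    """Advance 2D (x, y) positions and velocities by one timestep.

    positions, velocities: lists of [x, y] pairs. On a GPU, the body of
    this loop would be the per-thread kernel; here it is a plain loop.
    """
    for i in range(len(positions)):
        vx, vy = velocities[i]
        vy += GRAVITY * dt            # apply gravity to velocity first
        x, y = positions[i]
        x += vx * dt                  # then move with the updated velocity
        y += vy * dt
        if y < 0.0:                   # crude ground plane: bounce with damping
            y, vy = -y, -vy * 0.5
        positions[i] = [x, y]
        velocities[i] = [vx, vy]
    return positions, velocities

# Usage: advance two dropped particles by one 10 ms step.
pos = [[0.0, 1.0], [5.0, 2.0]]
vel = [[1.0, 0.0], [0.0, 0.0]]
pos, vel = step_particles(pos, vel, dt=0.01)
```

Because each iteration reads and writes only element `i`, the loop parallelizes trivially across thousands of GPU lanes, which is why this workload suits a GPU far better than the handful of cores on a quad-core CPU.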

More GPUs

So we need more GPUs; that's the answer. But, as has been shown, GPUs don't scale well beyond two, so let's put in three and use one for physics. We can do that today easily by sticking a dual-GPU AIB in one slot and a single-GPU AIB in the other. And if you're lucky enough to have an AMD four-slot Spider mobo or a triple play from Nvidia, you've got the slots, and most likely you've also got last year's AIB that you haven't given to your little brother or put up on eBay yet, so you're good to go.

But you're not, not yet anyway, because now we have to get the ISVs to expose the GPU to the physics code. And that, he says after catching his breath, is what the ATI and Havok announcement is all about – convincing the ISVs that not only will there be, but in fact there already is, an installed base, and that it's safe to port Havok's code and expose it to a GPU.

Now we have to get the ISVs to expose the GPU to the physics code

When will we see all this? Maybe, just maybe, this holiday season. In some of the latest DirectX 10 titles built with more flexible architectures, it will be possible for the ISVs to put out a patch that exploits the GPU as a physics accelerator. And Nvidia and the PhysX ISV OEMs will do exactly the same thing.

Also, when these two GPU behemoths officially launch their new chips this month (even though all the websites seem to have launched them already), one of the demos you can count on will be physics acceleration.

I for one welcome it and say: “Where the hell have you been already?” I want it, I want it now.

 

Company of Heroes: Opposing Fronts uses Havok



HEXUS Forums :: 6 Comments

Nice article :thumbsup:
Of course, some of this material I've said already :P Although I was focusing on why GPU physics is more desirable than CPU physics at the moment, and why you can't easily use spare CPU cores to enhance graphics.

Personally, I still strongly believe that two independent systems are not the way to go at all. Although developers will now be assured of acceleration for at least one, they will still have to either support both, or lose out on half the market. For detailed physics implementations to take off in games, developers need to be assured that the vast majority of the market will support whatever system they choose to use. As much as I am not a supporter of a microsoft monopoly, I do believe that the best hope for physics depth lies in all the required tooling being made part of the next DirectX standard. Hardware drivers could then use whatever system they like to implement this, and a market would exist on top of the DirectX provisions for extensions to their offerings (although these would likely be unable to use GPU acceleration without clever trickery).
as long as its a microsoft OS then it will be a microsoft monopoly. The fact there isnt yet a clear cut way of handling physics that all hardware can abide to just emphasises that its a chicken with no head at the moment. as soon as it is brought into a future directx specification then we will see some coherency.

edit: i think microsoft were/are banking on the core increase making the point mute. even today looking towards the future thats not totally without merit
MadduckUK wrote:
> as long as its a microsoft OS then it will be a microsoft monopoly. The fact there isnt yet a clear cut way of handling physics that all hardware can abide to just emphasises that its a chicken with no head at the moment. as soon as it is brought into a future directx specification then we will see some coherency.
>
> edit: i think microsoft were/are banking on the core increase making the point mute. even today looking towards the future thats not totally without merit

Actually, I believe it's a two-headed chicken, not a chicken with no head. Also, the term is “moot point,” not “mute point”:

“Moot” is a very old word related to “meeting,” specifically a meeting where serious matters are discussed. Oddly enough, a moot point can be a point worth discussing at a meeting (or in court)—an unresolved question—or it can be the opposite: a point already settled and not worth discussing further. At any rate, “mute point” is simply wrong, as is the less common “mood point.”

Actually, that definition is a bit weak… ‘moot point’ derives from ‘moot court,’ a classroom environment where future litigators argue old cases in an attempt to learn from them. Moot means pointless, because all of their arguing in moot court isn't going to change the outcome of the original case.
…anyway, going back to the topic (chas's service to the English language er… noted), it does look like this will force cpu physics to the fore.