Benchmarking Physics
We've had a lot of responses about the benchmarking procedures we used in our first PhysX article. We would like to clear up what we are trying to accomplish with our tests, and explain why we are doing things the way we are. Hopefully, by opening up a discussion of our approach to benchmarking, we can learn how to best serve the community with future tests of this technology.

First off, average FPS is a good measure of full system performance under games. Depending on how the system responds to the game over multiple resolutions, graphics cards, and CPU speeds, we can usually get a good idea of the way the different components of a system impact an application's performance.
Unfortunately, when a new and underused product (like a physics accelerator) hits the market, the sharp lack of applications that make use of the hardware presents a problem to consumers attempting to evaluate its capabilities. In the case of AGEIA's PhysX card, our inability to test applications running with a full complement of physics effects in software mode really hampers our ability to draw solid conclusions.
In order to fill in the gaps in our testing, we would usually look toward synthetic benchmarks or development tools. At this point, the only synthetic benchmark we have is the boxes demo that is packaged with the AGEIA PhysX driver. The older tools, demos, and benchmarks (such as 3DMark06) that use the PhysX SDK (formerly named NovodeX) are not directly supported by the hardware; they would need to be patched to enable support, if that is even possible.
Other, more current demos (like CellFactor) will not run without the hardware in the system. The idea in these cases would be to stress the hardware as much as possible to find out what it can do. We would also like to find out how code running on the PhysX hardware compares to code running on a CPU (especially in a multiprocessor environment). Being able to control the number and type of physics objects being handled would allow us to get a better idea of what we can expect in the future.
To fill in a couple of gaps, AGEIA states that the PhysX PPU is capable of handling over 533,000 convex object collisions per second, and three times as many sphere collisions per second. This is quite difficult to relate back to real world performance, but it appears to be more work than a CPU or GPU could perform per second.
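For a rough sense of scale, those per-second rates can be converted into a per-frame budget. This back-of-the-envelope arithmetic is our own illustration, not an AGEIA figure:

```python
# Converting AGEIA's quoted collision rates into per-frame figures at 60 FPS.
# The 533,000 convex rate and the 3x sphere multiplier are AGEIA's claims;
# the per-frame math is our own illustrative arithmetic.
convex_per_sec = 533_000
sphere_per_sec = 3 * convex_per_sec
fps = 60

print(f"{convex_per_sec / fps:,.0f} convex collisions per frame")  # ~8,883
print(f"{sphere_per_sec / fps:,.0f} sphere collisions per frame")  # ~26,650
```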
Of course, there is no replacement for actual code, and (to the end user) hardware is only as good as the software that runs on it. This is the philosophy by which we live. We are dedicated first and foremost to the enthusiast who spends his or her hard-earned money on computer hardware, and there is no substitute for real world performance in evaluating the usefulness of a tool.
Using FPS to benchmark the impact of PhysX on performance is not a perfect fit, but it isn't as bad as it could be. Frames per second (in an instantaneous sense) is one divided by the time it takes to render a single frame. We call this the frametime. One divided by an average FPS is the average time it takes for a game to produce a finished frame. This takes into account the time it takes for a game to take in input, update game logic (with user input, AI, physics, event handling, script processing, etc.), and draw the frame via the GPU. Even though a single frame needs to travel the same path from start to finish, things like queuing multiple frames for rendering on the GPU (usually three at most) and multithreaded game programming are able to hide some of the overhead. Throw PhysX into the mix, and ideally we can offload some of this work somewhere else.
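As a concrete illustration of that relationship, here is a quick sketch of our own; all of the stage timings are invented for the example:

```python
# Frametime is the reciprocal of instantaneous FPS, and it is built up from
# the stages a game must complete each frame. All timings here are invented.
input_ms  = 1.0    # gather user input
logic_ms  = 8.0    # AI, physics, event handling, scripts
render_ms = 12.0   # GPU draw time

# With no overlap between CPU and GPU work, the stage times simply add up:
serial_ms = input_ms + logic_ms + render_ms
print(f"serial: {serial_ms:.1f} ms/frame = {1000.0 / serial_ms:.1f} FPS")

# With the GPU drawing frame N while the CPU prepares frame N+1, the
# steady-state frametime is set by whichever side is slower:
pipelined_ms = max(input_ms + logic_ms, render_ms)
print(f"pipelined: {pipelined_ms:.1f} ms/frame = {1000.0 / pipelined_ms:.1f} FPS")
```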
Here are some examples of how frametime can be affected by a game. These are very limited examples and don't reflect the true complexity of game programming.
CPU limited situations:
CPU: |------------ Game logic ------------||----
GPU:                                       |---- Graphics processing ----| |----
The GPU must wait on the CPU to set up the next frame before it can start rendering. In this case, PhysX could help by reducing the CPU load and thus the frametime.
Severely GPU limited situations:
CPU: |------ Game Logic ------|            |---
GPU: |-------- Graphics processing --------||---
The CPU can start work on the next frame before the GPU finishes, but any work more than three frames ahead of the GPU must be thrown out. In the extreme case, this can cause lag between user input and the graphics being displayed. In less severe cases, it is possible to keep the CPU more heavily loaded while the frametime still depends on the GPU alone.
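To make both cases concrete, here is a small simulation of our own (a sketch, not AGEIA tooling) of a CPU feeding a GPU that may run at most three frames ahead, modeled here as the CPU waiting rather than discarding work:

```python
# Hypothetical model of the two diagrams above: the CPU may get at most
# MAX_QUEUE frames ahead of the GPU. All timings are invented.
def simulate(cpu_ms, gpu_ms, frames=200, max_queue=3):
    cpu_free = 0.0   # when the CPU can start preparing the next frame
    gpu_free = 0.0   # when the GPU can start drawing the next frame
    done = []        # GPU completion time of each frame
    for i in range(frames):
        # The CPU stalls if it is already max_queue frames ahead of the GPU.
        if i >= max_queue:
            cpu_free = max(cpu_free, done[i - max_queue])
        cpu_done = cpu_free + cpu_ms
        cpu_free = cpu_done
        # The GPU starts once the frame is submitted and the GPU is idle.
        gpu_start = max(cpu_done, gpu_free)
        gpu_free = gpu_start + gpu_ms
        done.append(gpu_free)
    # Steady-state frametime: spacing between the last two displayed frames.
    return done[-1] - done[-2]

print(simulate(cpu_ms=20.0, gpu_ms=10.0))  # CPU limited: ~20 ms/frame
print(simulate(cpu_ms=8.0,  gpu_ms=25.0))  # GPU limited: ~25 ms/frame
```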
In either case, as is currently being done in both City of Villains and Ghost Recon Advanced Warfighter, the PhysX card could ideally be used to add effects without adding to frametime or CPU/GPU load. Unfortunately, the real world is not ideal, and in both of these games we see an increase in frametime for at least a couple of frames. There are many reasons we could be seeing this right now, but it seems to be less of a problem for demos and games designed around the PPU.
In our tests of PhysX technology in the games that currently make use of the hardware, we tested multiple resolutions and CPU speeds in order to determine how the PhysX card factors into frametime. For instance, it was very clear in our initial GRAW test that the game was CPU limited at low resolutions, because the framerate dropped significantly when running on a slower processor. Likewise, at high resolutions the GPU was limiting performance, because the drop in processor speed didn't affect the framerate in a significant way. In all cases, after adding the PhysX card, we could easily see that frametime was most significantly limited by the PhysX hardware itself, AGEIA driver overhead, or the PCI bus.
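The reasoning behind that methodology boils down to a simple rule of thumb. The helper and the numbers below are our own illustration, not actual GRAW results:

```python
# If underclocking the CPU moves the framerate, the game is CPU limited at
# that resolution; if it barely moves, the GPU (or the PPU, driver overhead,
# or the PCI bus) is the limit. All FPS values below are invented.
def bottleneck(fps_fast_cpu, fps_slow_cpu, threshold=0.10):
    drop = (fps_fast_cpu - fps_slow_cpu) / fps_fast_cpu
    return "CPU limited" if drop > threshold else "GPU (or other) limited"

print(bottleneck(75.0, 55.0))  # low resolution: big drop  -> CPU limited
print(bottleneck(32.0, 31.0))  # high resolution: tiny drop -> GPU limited
```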
Ideally, the PhysX PPU will not only reduce the load on the CPU (or GPU) by offloading the processing of physics code, but will also give developers the ability to perform even more physics calculations in parallel with the CPU and GPU. This solution absolutely has the potential to be more powerful than moving physics processing to the GPU or to a second core on a CPU. Not only that, but the CPU and GPU will be free to let developers accomplish increasingly complex tasks. With current generation games becoming graphics limited on the GPU (even in multi-GPU configurations), it seems counterintuitive to load it even further with physics. Certainly this could offer an increase in physics realism, but we have yet to see the cost.
67 Comments
phusg - Wednesday, May 17, 2006
> Performance issues must not exist, as stuttering framerates have nothing to do with why people spend thousands of dollars on a gaming rig.

What does this sentence mean? No, really. It seems to try to say more than just, "stuttering framerates on a multi-thousand dollar rig is ridiculous", or is that it?
nullpointerus - Wednesday, May 17, 2006
I believe he means that the card can't survive in the market if it dramatically lowers framerates on even high end rigs.

DerekWilson - Wednesday, May 17, 2006
check plus ... sorry if my wording was a little cumbersome.

QChronoD - Wednesday, May 17, 2006
It seems to me like you guys forgot to set a baseline for the system with the PPU card installed. From the picture that you posted in the CoV test, the number of physics objects looks like it can be adjusted when the AGEIA support is enabled. You should have run a benchmark with the card installed but keeping the level of physics the same. That would eliminate the loading on the GPU as a variable. Doing so would cause the GPU load to remain nearly the same, with the only difference being the CPU and PPU taking time sending info back and forth.

Brunnis - Wednesday, May 17, 2006
I bet a game like GRAW would actually run faster if the same physics effects were run directly on the CPU instead of this "decelerator". You could add a lot of physics before the game would start running nearly as badly as it does with the PhysX card. What a great product...

DigitalFreak - Wednesday, May 17, 2006
I'm wondering the same thing.

"We still need hard and fast ways to properly compare the same physics algorithm running on a CPU, a GPU, and a PPU -- or at the very least, on a (dual/multi-core) CPU and PPU."
Maybe it's a requirement that the developers have to intentionally limit (via the sliders, etc.) how many "objects" can be generated without the PPU in order to keep people from finding out that a dual core CPU could provide the same effects more efficiently than their PPU.
nullpointerus - Wednesday, May 17, 2006
Why would ASUS or BFG want to get mixed up in a performance scam?

DerekWilson - Wednesday, May 17, 2006
Or EPIC with UnrealEngine 3? Makes you wonder what we aren't seeing here, doesn't it?
Visual - Wednesday, May 17, 2006
So what you're showing in all the graphs is lower performance with the hardware than without it. WTF?

Yes, I understand that testing without the hardware is only faster because it's running lower detail, but that's not clearly visible from a few glances over the article... and you do know how important the first impression really is.
Now I just gotta ask, why can't you test both software and hardware with the same level of detail? That's what a real benchmark should show, at least. Can't you request some complete software emulation from AGEIA that can fool the game into thinking the card is present, and turn on all the extra effects? If not from AGEIA, maybe from ATI or nVidia, who seem to have worked on such emulations that even use their GFX cards. In the worst case, if you can't get the software mode to have all the same effects, why not at least turn off those effects when testing the hardware implementation? In City of Villains, for example, why is the software test run with a lower "Max Physics Debris Count"? (Though I assume there are other effects that get automatically enabled with the hardware present and aren't configurable.)
I just don't get the point of this article... if you're not able to compare apples to apples yet, then don't even bother with an article.
Griswold - Wednesday, May 17, 2006
I think they clearly stated in the first article that GRAW, for example, doesn't allow higher debris settings in software mode. But even if it did, a $300 part that is supposed to be lightning fast and what not should be at least as fast as ordinary software calculations - at higher debris count.
I really don't care much about apples and oranges here. The message seems to be clear: right now it isn't performing up to snuff, for whatever reason.