Who defines what “real-world” is?
There are so many different variables that go into game performance just within the game itself, not to mention the different configurations of each machine, and even the way those machines are set up (read: BIOS settings), that it's virtually impossible to tell someone how a given hardware setup or piece of hardware will perform for you. So our question is: what exactly is a "real-world" benchmark? Who defines what this is?
In a flight simulator for instance, is “real-world” testing at 35,000 feet with the graphics card merely rendering the sky, clouds, and any other objects that may be up at that altitude? Or is real-world defined as flying nap-of-the-earth at 200 feet with your fighter jet flying at 600 miles per hour and the graphics card rendering not only the sky and clouds, but individual trees and other terrain objects? (Not to mention the difference in the flight model and how the CPU handles physics of that flight model between these two scenarios.)
Or we can switch genres and go to Quake. Is real-world testing outdoors or indoors? How many enemies on-screen? 5, 10, 15? Is it more real-world to test single-player where you’ve got the CPU handling both AI and physics of perhaps 3-10 enemies shooting at you simultaneously, or hop on a multiplayer server where the CPU no longer has to handle AI but you may have dozens of guys on a map at once?
These are the types of decisions a reviewer must face when testing hardware. As you can see, there are literally countless combinations out there that can affect performance. It would be very difficult for anyone to test all of these combinations; sacrifices must be made in order to get the review online in time for your deadline. Does that make your test any more or less of a "real-world" test?
Instead of worrying about whether something is real-world or not, what we attempt to do with our methodology is create a scenario that's as demanding as possible, or as close to that as possible, for the component we happen to be testing at the moment. By running a worst-case scenario we can see how the hardware performs when it's stressed the most; under less stressful scenarios it should be capable of delivering better performance that hopefully produces an adequate frame rate. (This actually brings us to another point: what exactly is "adequate" anyway? For some people 30 FPS may be fine; others may want no less than 60. The truthful answer is that it's completely subjective. What's considered "adequate" will vary from person to person and even game to game; for instance, a twitch shooter like Quake demands a higher frame rate than a more strategic shooter like Splinter Cell or Rainbow Six.)
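To make that concrete, here's a rough sketch (not anything pulled from our actual test scripts; the frame times and the 30/60 FPS targets are made-up, hypothetical values) of how a worst-case run might be boiled down to the average and minimum numbers people argue about:

```python
# Illustrative only: summarizing a frame-time log from a worst-case test run.
# All numbers below are hypothetical; "adequate" thresholds are subjective.

frame_times_ms = [12.0, 14.5, 13.0, 35.0, 16.0, 40.0, 15.5]   # made-up worst-case scene

avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
min_fps = 1000.0 / max(frame_times_ms)                         # slowest single frame

print(f"average: {avg_fps:.1f} FPS, minimum: {min_fps:.1f} FPS")
for threshold in (30, 60):                                     # pick your own "adequate"
    verdict = "holds" if min_fps >= threshold else "dips below"
    print(f"{threshold} FPS target: {verdict}")
```

The same average can hide very different minimums, which is exactly why a worst-case run tells you more than a comfortable one.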
According to [H]ardOCP, "real-world" CPU performance testing means using a variety of AA, AF, resolution and in-game settings to try to average 40fps... or 35fps, or 73fps. Whatever the video card happens to max out at. "Real-world" benchmarks take no notice of your preference to run at a minimum of 60fps, if that's your flavor, or if you prefer lower resolutions and higher refresh rates to make things easy on the eyes, rather than extra eye candy.
You see, the assertion is that the hardware reviewer in question knows what resolution and settings you prefer to play at, and these apparently will always top out the video card in question. With the graphics adapter thus acting as a bottleneck, the latest CPU from Intel, which in other tests is about 10-25% faster than AMD's offering, turns out to be no better in these benchmarks.
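A toy frame-time model (our own back-of-the-envelope illustration, not anything from [H]ardOCP's or our test suites, with invented numbers) shows why that happens. When CPU and GPU work overlap, the slower of the two paces each frame, so a faster CPU simply disappears from the results once the GPU is the limiter:

```python
# Illustrative only: why a 20-25% faster CPU shows no gain in a GPU-bound test.

def fps(cpu_ms_per_frame: float, gpu_ms_per_frame: float) -> float:
    """With CPU and GPU work overlapped, the slower of the two sets the frame time."""
    frame_ms = max(cpu_ms_per_frame, gpu_ms_per_frame)
    return 1000.0 / frame_ms

# Hypothetical CPUs: B needs 20% less time per frame than A.
cpu_a_ms, cpu_b_ms = 10.0, 8.0

# GPU-bound (high resolution, heavy AA/AF): GPU needs 25 ms per frame.
print(fps(cpu_a_ms, 25.0), fps(cpu_b_ms, 25.0))   # 40.0 vs 40.0 -> CPUs look identical

# CPU-bound (lower resolution or reduced settings): GPU needs only 5 ms per frame.
print(fps(cpu_a_ms, 5.0), fps(cpu_b_ms, 5.0))     # 100.0 vs 125.0 -> the gap reappears
```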
We as a community are no doubt truly grateful to our friends and rivals in Texas for pointing out that if your video card is maxed out, differences in CPU speed can be difficult to see. We said the same thing in our GeForce 7800 GTX CPU scaling article last year, where an Athlon 64 3500+ delivered performance similar to an Athlon 64 FX-57 in our benchmarks. Of course, in that article we happened to include results not just at 1600x1200 but also at 1280x1024, a common resolution for many LCDs, and at that resolution the FX-57 did show a slight performance advantage over the 3500+. In the past we've run CPU scaling results for other GPUs as well, including the Radeon X800.
[H]ardOCP does raise a good point that most people don't run their games at the lowest possible video settings just to get the highest frame rate. These truly are not the days of Quake anymore; we do like our eye candy...