Whenever a next-generation GPU such as NVIDIA’s GeForce 7800 GTX is launched, the buzz typically revolves around the card’s new features and, of course, its performance. However, there’s also a less talked about concern: that the new graphics card is, in some ways, too powerful.
What! It’s too fast? How does this happen?
On the software side, even the latest games often lag behind the newest hardware by 6-12 months. This is because hardware cycles are refreshed so quickly today that game developers can’t keep up. Whereas ATI and NVIDIA often replace their latest products every six months, a game can take (at best) 1-2 years to develop, if not longer. By the time a game is developed from start to finish, the 3D graphics landscape may have advanced by a generation or two.
In other cases publishers rehash the same outdated game engine over and over again. This often happens with sports titles which are typically refreshed every year. In some instances games can even be CPU-limited due to poor programming.
But software isn’t the only thing that can hold back a next-gen card at launch; another common culprit is the CPU.
Quite simply, with next-gen graphics processors delivering 1.5-2X the performance of their predecessors at launch, the CPU can become a bottleneck in many of the titles we test with. After all, the clock speed of new CPUs tends to increase only in increments of 100-200MHz, delivering roughly 10% more performance with each new processor release.
How can you tell when you’re CPU-limited? Look for benchmarks where you hit the same frame rate regardless of graphics settings (such as increasing screen resolution): if dropping in a faster CPU is what raises your frame rates, the processor is the bottleneck.
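The flat-frame-rate symptom can be sketched as a quick check over benchmark results. The function name and the fps figures below are hypothetical, purely for illustration, not numbers from our testing:

```python
# Sketch: spotting a CPU-limited result in benchmark data.
# All frame rates (fps) here are made-up illustrative values.

def is_cpu_limited(fps_by_resolution, tolerance=0.05):
    """True if frame rate stays flat (within tolerance) as resolution rises,
    the classic sign that the CPU, not the GPU, is the bottleneck."""
    rates = list(fps_by_resolution.values())
    spread = (max(rates) - min(rates)) / max(rates)
    return spread <= tolerance

# A CPU-limited scenario: roughly the same fps no matter the resolution.
cpu_bound = {"1024x768": 85.2, "1280x1024": 84.9, "1600x1200": 84.1}

# A GPU-limited scenario: fps falls off as the pixel load grows.
gpu_bound = {"1024x768": 120.4, "1280x1024": 92.7, "1600x1200": 61.3}

print(is_cpu_limited(cpu_bound))  # True
print(is_cpu_limited(gpu_bound))  # False
```

The 5% tolerance is an arbitrary threshold; real benchmark runs have some noise, so a small spread across resolutions still counts as "flat".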
This is a situation you’d want to avoid if you’ve just plunked down $600 on a brand new 7800 GTX, as you’re not getting the most from your money. (On the bright side, being CPU-limited does let you crank up the image quality settings for “free”, since you won’t see the performance hit usually associated with those changes.)
In this article, we’ll attempt to break down what clock speed(s) you should shoot for if you plan on upgrading to a GeForce 7800 GTX. Our platform for today is the nForce4-based Athlon 64, as this platform is the solution of choice for most gamers right now.
The processors…a quick refresher
Sorting through AMD’s current Athlon 64 lineup can be a sizeable task. Not only do you have to worry about Athlon 64 versus Athlon 64 FX (and don’t forget the X2), but you also can’t overlook AMD’s newer 90-nanometer processors. Besides the newer manufacturing process, these chips bring support for 11 of Intel’s 13 SSE3 instructions, run at a lower operating voltage, and contain a tweaked memory controller that supports mismatched DIMM sizes and allows all DIMM slots to be populated without a performance slowdown. Athlon 64 cores based on this new 90-nm revision go by the name of “Venice” and contain 512KB of L2 cache, just like previous A64 processors.
On the FX side, the 90-nm chip you should be on the lookout for is San Diego. San Diego sports all the same enhancements first introduced with Venice, only as an FX processor, San Diego chips ship with a 1MB L2 cache.
For our testing, we used a Venice-based Athlon 64 3000+ and 3500+, while our 3800+ is based on the older Newcastle core (unfortunately, we weren’t able to snag a Venice-based 3800+). Our 4000+ and FX-55 are based on AMD’s ClawHammer core, while the FX-57 is built on AMD’s San Diego core. Keep in mind that the 4000+ is essentially the FX-53 in disguise, as it sports the same clock speed and cache size, only it ships with a locked clock multiplier.
Finally, we’ve also thrown in a GeForce 6800 GT, so you can see how this popular card scales in comparison to the 7800 GTX. It’s quite possible that a 6800 GT paired with a speedy processor will outperform a 7800 GTX saddled with a slower chip, so it’s worth checking for that situation.