The Real High-end
Any regular Firingsquad reader is most likely a computer enthusiast who understands the importance of speed in a processor. You're probably the person who requests an OEM CPU with a specific stepping in order to increase your overclocking chances. You may be someone who's willing to spend $50 on a fan to eke out another 33MHz on your PIII-450.
You were willing to shell out the big bucks for the Athlon 700, but would you spend $3000 on a single chip? Do you render full-length movies like Toy Story on your system? Do you have 2GB of ECC SDRAM to play with? If not, then this article may be your first introduction to the stratified and expensive world of high-performance computing, where the only number you can be sure of is the price tag.
Different Ideas of Performance
The most recognizable indicator of performance in the PC world is megahertz. Sophisticated readers will realize that this measure is flawed enough to be useful only as a first-glance approximation of a system's performance. For example, an Athlon 500 performs on par with a Pentium-III 600. However, would you believe me if I told you that an Athlon 600 gets its butt kicked by a 200MHz IBM processor? True, from a certain point of view.
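The arithmetic behind that "certain point of view" is simple: rough throughput is clock rate times the average number of instructions the chip completes per cycle (IPC), and a wide server CPU can complete far more work per tick than a desktop part. The sketch below uses made-up IPC figures purely for illustration; they are not measured numbers for any real chip.

```python
# Back-of-the-envelope model: clock speed alone doesn't determine throughput.
# IPC values here are illustrative assumptions, not benchmark results.

def relative_throughput(clock_mhz, avg_ipc):
    """Crude throughput estimate: millions of instructions per second."""
    return clock_mhz * avg_ipc

# Hypothetical chips: a fast-clocked desktop CPU vs. a wide, slower server CPU.
desktop = relative_throughput(600, 1.0)  # 600 MHz, ~1 instruction per cycle
server = relative_throughput(200, 4.0)   # 200 MHz, ~4 instructions per cycle

print(desktop, server)
```

On a workload that actually keeps all of those execution units busy, the "slow" 200MHz part comes out ahead, which is why a single megahertz number tells you so little outside the PC world.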
Measuring performance in the world of high-performance workstations and servers is a completely different game than in the world of personal computers. Many of these machines will run only one or two applications for their entire lifetimes, in which case a number of different factors can drastically change performance.
A database system expected to handle requests scattered across a 1GB database file should have a fast I/O subsystem, extremely high memory bandwidth, and large L1 and L2 caches. On the other hand, a 3D rendering farm requires CPUs with the fastest possible floating-point performance and support for massive multiprocessing.