NVIDIA’s fall from grace in the high-end segment has been one of the most closely followed stories of 2003 to this point. When ATI stunned the hardware world with the RADEON 9700 PRO last summer, everyone expected NVIDIA to counter with something stronger. After all, this was the company that had used the six-month product cycle to beat the competition into obsolescence with great success. The GeForce FX 5800, and more specifically the GeForce FX 5800 Ultra, were meant to restore NVIDIA’s presence in this space, but both products have been AWOL since they were announced.
This came as quite a surprise to industry watchers: after years of near-flawless execution, NVIDIA had missed not only the fall launch window but also the critical holiday shopping season. As a result, ATI’s RADEON 9700 and RADEON 9500 cards enjoyed the distinction of being the only DirectX 9 solutions on the market, along with all the fanfare that comes with being on top in performance and features.
NVIDIA's Last Chance Gas demo
As the day passes, the sky changes and the sun's position fills the area with shadows
Concept shot of Dawn's twin, Dusk
Just what went wrong with NVIDIA and the GeForce FX 5800 family? In all honesty, the answer depends on who you talk to, but the decision to go with bleeding-edge technologies like DDR2 memory and TSMC’s relatively untested 0.13-micron manufacturing process is widely considered the root of the problem. Essentially, NVIDIA took a gamble by incorporating these components into the GeForce FX 5800’s design and lost, while ATI played it safe, sticking with TSMC’s more mature 0.15-micron process (which, incidentally, is used on a wide range of NVIDIA products) and DDR memory.
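To put the DDR2 gamble in perspective, here is a quick back-of-the-envelope bandwidth calculation. The figures are the commonly cited launch specs and should be read as assumptions: a 128-bit bus running DDR2 at 500 MHz (1 GHz effective) for the FX 5800 Ultra, versus a 256-bit bus running DDR at roughly 310 MHz (620 MHz effective) for the RADEON 9700 PRO.

# Rough peak-bandwidth sketch; specs are commonly cited figures,
# not official measurements, so treat the inputs as assumptions.
def bandwidth_gb_s(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Peak theoretical bandwidth = bytes per transfer * transfers per second."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * effective_clock_mhz * 1e6 / 1e9

print(f"FX 5800 Ultra:   {bandwidth_gb_s(128, 1000):.1f} GB/s")  # ~16.0 GB/s
print(f"RADEON 9700 PRO: {bandwidth_gb_s(256, 620):.1f} GB/s")   # ~19.8 GB/s

Despite its much faster memory clock, the narrower 128-bit bus leaves the FX 5800 Ultra with less raw bandwidth than its rival, which goes a long way toward explaining why the exotic memory never paid off.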
These differences in approach resonate throughout the architecture of both products, in much the same way AMD’s Athlon XP differs from Intel’s Pentium 4. Like the Pentium 4, the GeForce FX needs its high clock speed to attain optimal performance; the RADEON 9700, on the other hand, relies on a more balanced approach of additional functional units running at a more modest clock frequency.
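A rough sketch of the pixel-throughput math illustrates the tradeoff. The pipeline counts and clocks below are the widely quoted configurations and should be read as assumptions, since NVIDIA’s exact pipeline arrangement was a matter of some debate at the time.

# Peak single-texture fill rate = pixel pipelines * core clock.
# Widely quoted configurations (treated here as assumptions):
#   GeForce FX 5800 Ultra: 4 pipelines at 500 MHz
#   RADEON 9700 PRO:       8 pipelines at 325 MHz
def fill_rate_mpixels(pipelines: int, core_clock_mhz: int) -> int:
    return pipelines * core_clock_mhz

print(fill_rate_mpixels(4, 500))  # 2000 Mpixels/s for the FX 5800 Ultra
print(fill_rate_mpixels(8, 325))  # 2600 Mpixels/s for the 9700 PRO

Even spotting NVIDIA a 175 MHz clock advantage, the 9700 PRO’s wider design comes out ahead on paper, much as the Athlon XP’s per-clock efficiency offsets the Pentium 4’s raw megahertz.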
NVIDIA must have felt that with the GeForce FX 5800’s 125 million transistors, 0.13-micron was more feasible than 0.15-micron: the smaller process keeps the die size in check, making the chip cheaper to produce, and the astounding clock speeds NVIDIA was shooting for were also more attainable at 0.13-micron. At least, that’s probably what NVIDIA’s engineers were thinking early in the chip’s design. In practice, we now see that NVIDIA and TSMC have had a difficult time getting sufficient quantities of chips at the target clock speeds, resulting in a lot of parts that weren’t up to snuff and had to be thrown away.
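As a first-order approximation (one that ignores pad limits, layout differences, and other real-world factors), die area scales with the square of the feature size, so a shrink from 0.15 to 0.13 micron buys a meaningful reduction:

# First-order die-area scaling: area ~ (feature size)^2.
# This ignores pad limits and layout effects, so treat it as a rough guide.
shrink = (0.13 / 0.15) ** 2
print(f"Relative die area at 0.13-micron: {shrink:.0%}")  # ~75% of the 0.15-micron size

Roughly a quarter of the silicon cost disappears on paper; the catch, as NVIDIA learned, is that an immature process can hand that saving right back in poor yields.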
NV35: The graphics inferno
As this story was progressing in the public limelight, NVIDIA had another design team that was quietly plugging away on its follow-up to GeForce FX 5800, internally codenamed “NV35”. Just as GeForce4 was NVIDIA’s “refresh” to GeForce3, NV35 was designed as an update to GeForce FX 5800. NVIDIA has followed this basic strategy since its inception. If you recall NVIDIA’s previous refresh products, they’ve all offered some pretty remarkable performance improvements over their predecessors. Does NV35 live up to this legacy?