The Odd Couple: Intel Core 2 Extreme X6800 and GeForce 7600GT
by: CanadaDave | Final Round FiringSquad-Intel Editors Challenge
In the first article of this three part series, the test-bed system had been built up to the point of reaching the operating system desktop. Now it was time to begin putting the Core 2 Extreme X6800-based system to the test.
» Media (18 figures)
Figure 1 - Advanced BIOS Screenshot
Figure 2 - Intelligent Tweaking Menu
Figure 3 - Advanced Options Enabled
Figure 4 - Peak Overclock Settings
Figure 5 - 3DMark GPU
Figure 6 - 3DMark CPU
Figure 7 - Sandra FPU Chart
Figure 8 - Sandra Multimedia Chart
Figure 9 - WinRAR Chart
Figure 10 - TMPGEnc Encoding Chart
Figure 11 - OggDrop Encoding Chart
Figure 12 - Company of Heroes Chart
Figure 13 - Supreme Commander Chart
Figure 14 - Unreal Tournament 2004 Chart
Figure 15 - Vista Company of Heroes Chart
Figure 16 - Vista TMPGEnc Encoding Chart
Figure 17 - Vista WinRAR Performance Chart
Figure 18 - Linux Unreal Tournament 2004 Chart
Motherboards are generally configured by the factory with overly conservative settings in order to ensure that the newly built system will boot up properly. The difference between ‘conservative’ and ‘aggressive’ settings can be very substantial, so the initial task prior to benchmarking was to be sure that the motherboard was configured optimally for our needs. A trip into the BIOS menu showed the usual settings, as well as some features specific to the CPU:
(See Figure 1 - Advanced BIOS Screenshot)
Due to the aggressive system settings that will be used in the test-bed system, all "safeguard" and "memory saving" features of the motherboard have been disabled. These include CPU features such as EIST (which will lower the processor speed during idle times) and TM2 (Intel's thermal monitoring technology). While these features are generally very desirable, the intention of this article is to push the CPU to the limit: All other considerations are secondary.
(See Figure 2 - Intelligent Tweaking Menu)
Going the extra mile to protect users from themselves, Gigabyte hides all but the most basic performance settings from the default BIOS view, even under the "Intelligent Tweaking" section. Pressing CTRL-F1 revealed the desired settings, though in some cases Gigabyte's naming choices were less than descriptive.
(See Figure 3 - Advanced Options Enabled)
The Gigabyte 965P-DS3's advanced options are quite comprehensive, giving the user access to several voltage levels, bus speed and (where applicable) multiplier settings, and memory timings. Two vaguely-worded selections in this menu are of particular interest: “Robust Graphics Booster” and “Memory Performance Enhance”. Both appear to be methods of slightly overclocking their corresponding components, but no specific description of either setting exists in the manual. Tomas Lee, Marketing Manager for Gigabyte USA, confirmed that both features are in fact overclocking options:
“The Robust Graphics Booster will allow the chipset and GPU to synchronize together in maximizing the GPU Clock speed in order to enhance the graphics performance, kind of like NVIDIA LinkBoost Technology.
Memory Performance Enhance will have our certified memory modules, e.g. Corsair, Crucial and Kingston, overclocked to maximum speed for the best performance.”
With that in mind, experienced overclockers may wish to disable both settings and raise frequency and voltage manually, as these all-purpose options offer little control over how much overclocking is actually applied. Thanks to Tomas from Gigabyte for his response!
Also of interest to overclockers, Gigabyte has implemented an automatic voltage control feature in the 965P-DS3. This feature senses the voltage required by the various components based on the settings chosen (multiplier, bus speed, etc.) and adjusts on the fly to give the components what they need. In practice, the feature did function as intended, raising the CPU VCore setting as the processor frequency increased. The automatic settings were only slightly less successful than manual overclocking, reaching a maximum of 3.45GHz versus a manual 3.52GHz. While this feature will likely hold limited appeal for experienced overclockers, given its slightly lower ceiling and, presumably, fears of the board overvolting the CPU, it continues Gigabyte’s “safer tweaking” mantra: it helps prevent users from feeding components too much or too little voltage through a mistake in the manual settings.
The Core 2 Extreme X6800 processor's stock frequency is 2.93GHz. Following the norm for Intel’s Extreme line, the multiplier is unlocked, giving the user the opportunity to modify both the multiplier setting and the FSB. This allows substantially more flexibility in overclocking than a standard Intel processor, which allows access only to the bus speed.
The initial goal of the overclocking process was to reach the highest CPU frequency possible, then to find the best performance point for the system as a whole. Remember that increasing the bus speed also raises the frequency of the other components (memory, for example) that share the bus, whereas increasing the CPU multiplier raises only the CPU frequency and leaves those components at stock speeds. All else being equal, therefore, the system will perform better overall if a given CPU speed is reached by raising the bus speed rather than the multiplier alone – though system stability is much harder to ensure.
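The clock relationship described above can be sketched numerically. The helper below is purely illustrative (the function name is ours, not a real tool), using the X6800's stock 266MHz bus and the multiplier values discussed in this article:

```python
# Illustrative sketch: CPU core frequency = FSB speed x multiplier.
# All figures mirror the values quoted in the article.

def cpu_mhz(fsb_mhz: int, multiplier: int) -> int:
    """Effective CPU core frequency in MHz."""
    return fsb_mhz * multiplier

stock = cpu_mhz(266, 11)    # 2926MHz, i.e. the stock ~2.93GHz
mult_oc = cpu_mhz(266, 13)  # 3458MHz: only the CPU speeds up; memory stays at stock
bus_oc = cpu_mhz(271, 13)   # 3523MHz: the bus-linked components speed up as well
```

The two overclocked settings differ by only 65MHz of CPU speed, but the bus-speed route also lifts every component sharing the bus, which is why it tends to yield better overall system performance.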
With the above in mind, the multiplier setting was raised to 12, and the system was able to proceed through the synthetic benchmark suites without issue, and at the standard voltage levels. Moving the multiplier to 13 brought the CPU to a frequency of 3458MHz, though the system became unstable without a voltage increase to 1.4V.
An increase of the multiplier to 14 resulted in the system being unable to POST at all, regardless of voltage settings employed. With the CPU therefore being within 266MHz of its maximum speed, the bus speed setting was increased. Very quickly, however, the system ran up against a brick wall: the bus speed setting could only be increased to 271MHz before the system refused to POST. Reducing the memory multiplier to lower the frequency of the memory allowed only an increase of 2MHz in bus speed (for a total of 3549MHz), but the system would no longer boot Windows reliably. The stability threshold had therefore been reached: the Core 2 Extreme X6800 in the test-bed system will run happily at a maximum speed of 3523MHz.
(See Figure 4 - Peak Overclock Settings)
With the CPU at 3523MHz and the memory at a fairly conservative 817MHz, the entire suite of benchmarks was run. Stability was flawless, and the CPU reached a maximum operating temperature of 56 degrees Celsius during the test. This is a positive result, given that Intel’s rather conservative thermal guidelines specify a maximum of 60.4 degrees C for the processor. With the benchmark suite completing successfully, this was confirmed as an acceptable level at which to run the system.
The CPU being only one component in the system, attention turned to the memory itself. The CPU multiplier was returned to its default setting of 11, and memory timings were relaxed to 5-5-5-15 to see the highest frequency the OCZ Platinum could attain under “worst case” conditions. The bus speed was turned up to 300MHz (900MHz memory at a 3x multiplier), and the system refused to POST regardless of the voltage applied to the memory. With the memory voltage set back to default, the system would POST at a maximum memory setting of 843MHz, with the timings relaxed as far as the system would allow. Increasing the voltage actually worsened matters – with a 0.1V increase, the system would no longer POST.
This resulted in a potential CPU setting of 281MHz bus speed × 12 = 3372MHz, a level which produced benchmark scores a couple of percentage points lower than the 13×271MHz setting.
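The memory arithmetic above follows the same pattern as the CPU clock: memory frequency is the bus speed times the memory multiplier. A quick sketch (helper name is ours) using the figures just quoted:

```python
# Illustrative sketch of the memory-speed arithmetic described above:
# memory frequency = bus speed x memory multiplier.

def memory_mhz(fsb_mhz: int, mem_multiplier: int) -> int:
    return fsb_mhz * mem_multiplier

no_post = memory_mhz(300, 3)   # 900MHz: refused to POST at any voltage
best = memory_mhz(281, 3)      # 843MHz: the highest POST-able setting
cpu_option = 281 * 12          # 3372MHz CPU, vs 13 x 271 = 3523MHz
```

This makes the trade-off concrete: the 281MHz bus maximizes memory speed, but at a 12x multiplier it leaves the CPU roughly 150MHz below the 13×271MHz peak.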
The remaining system component to be tweaked – and one of the most important – was the graphics card. With the “Graphics Booster” option disabled, the graphics card could be overclocked by hand to a fairly substantial 619MHz core and 805MHz memory, a reasonable increase from the card's default 560MHz core and 700MHz memory.
Setting the “Graphics Booster” option in the BIOS to “Fast” bumped the GeForce 7600GT up to 589MHz core and 722MHz memory. Changing the setting to “Turbo” ratcheted the speed up to 617MHz core and 736MHz memory – a substantial core increase, though memory speed was only nudged up slightly. This option therefore provides little more than an easy way to overclock the graphics card directly from the BIOS, without separate utilities – interesting, though of less use to anyone familiar with system tweaking.
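For context, the headroom figures above can be expressed as percentage gains over stock. The helper below is our own illustration, not a tool used in the testing:

```python
# Illustrative percentage-gain calculation for the GPU overclocks above.

def oc_gain_pct(stock_mhz: int, oc_mhz: int) -> float:
    """Overclock gain as a percentage of the stock frequency."""
    return round((oc_mhz - stock_mhz) / stock_mhz * 100, 1)

manual_core = oc_gain_pct(560, 619)  # manual core overclock: +10.5%
manual_mem = oc_gain_pct(700, 805)   # manual memory overclock: +15.0%
turbo_core = oc_gain_pct(560, 617)   # BIOS "Turbo" core: +10.2%
turbo_mem = oc_gain_pct(700, 736)    # BIOS "Turbo" memory: only +5.1%
```

The comparison shows why the manual route won out: "Turbo" nearly matches the hand-tuned core clock but leaves two-thirds of the memory headroom on the table.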
As the video card could be clocked higher by hand than via the BIOS option, the “Graphics Booster” setting remained disabled for the duration.
Note – as the 7600GT will represent a bottleneck in the system benchmarks, the graphics card remained overclocked, with 619MHz core and 805MHz memory settings in place for all benchmarks.
The benchmark system as outlined in the first part of this series:
Intel Core 2 Extreme X6800
Gigabyte 965P-DS3 motherboard
Gigabyte 7600GT Graphics Card
2x1GB OCZ Platinum memory (4-5-4-15 timings)
Western Digital 500GB Hard drive
Pioneer DVR-710 DVD Writer
To obtain comparison figures, a Pentium D 805 processor was also put into the test-bed system to generate performance numbers relative to the Core 2 Extreme X6800. All benchmarks below include the X6800 both at stock speed and at the highest speed at which it remained stable, 3.52GHz.
Synthetic Benchmarks – 3DMark 2006 and SiSoft Sandra XI
The GPU test in FutureMark’s 3DMark06 was used to specifically test the performance of the Gigabyte GeForce 7600GT, with little impact from other system components. The CPU component of the test primarily stresses the Core 2 Extreme processor, but also relies on the speed of the OCZ Platinum memory. SiSoft Sandra concentrates more specifically on the CPU itself, performing both integer and floating point tests. Both benchmarks are synthetic in nature and, though occasionally manipulated by driver tweaks, are commonly cited as representations of the raw performance of their target components.
(See Figure 5 – 3DMark06 GPU Chart)
While processor speed still has a small influence, the GPU score in 3DMark is driven primarily by the graphics card. No other cards were available as a point of reference for this test, but scores in the range shown indicate a graphics core performing at a lower-mainstream level, according to reference points found on FutureMark’s website.
(See Figure 6 – 3DMark06 CPU Chart)
The X6800 showed a strong improvement over the Pentium D 805 in the CPU portion of the FutureMark test, as expected. While this benchmark is of little value in predicting real-world performance, it does show the relative performance levels between these two processors.
(See Figures 7 and 8 – SiSoft Sandra XI Charts)
SiSoft Sandra’s two CPU benchmarks showed similar results to the 3DMark test, though with a more pronounced result. In both cases, the Core 2 Extreme performed far better than the Pentium D 805, despite the small differential in clock speeds.
Application and Encoding Benchmarks – WinRAR, TMPGEnc and Ogg Vorbis (OggDrop)
WinRAR was used to compress an 800MB AVI file using its “normal” compression setting. As WinRAR is multi-threaded, it showcases processors with multiple cores well.
(See Figure 9 – WinRAR Chart)
The Core 2 Extreme extends its dominance in this test, roughly halving the time it took to compress the AVI file under WinRAR. This speaks volumes about the architectural improvements of Core over Netburst: This real-world doubling of performance comes despite the slim 8% difference in clock speeds between the processors.
Long a sweet spot for Intel processors, TMPGEnc is a popular video encoding application. In this instance, a 750 megabyte AVI file was converted to MPEG-2 using the pre-built DVD template setting in the application wizard.
(See Figure 10 – TMPGEnc Chart)
Continuing the experience of the previous benchmarks in this series, the Core 2 Extreme provides double the performance of the Pentium D 805. MPEG-2 conversion under TMPGEnc, which used to take far longer than the length of the video being converted, completes in less than a third of the playback time when the X6800 processor is given the job.
OggDrop is a very small application which converts audio files into the open Ogg format. It was used to convert a 32MB WAV file to Ogg Vorbis at a 128kbps bitrate, with the default -q4 quality option.
(See Figure 11 – OggDrop Ogg Vorbis Encoding)
Ogg Vorbis conversion similarly took place far more quickly on the X6800 than the Pentium D805, showing the advantages inherent both in the Core architecture, and the larger L2 cache.
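For a sense of scale of the encode job above: assuming the source was a CD-quality WAV (44.1kHz, 16-bit, stereo – the article does not state its format), the 32MB file works out to a little over three minutes of audio, and the 128kbps Vorbis output to roughly 3MB:

```python
# Rough size/duration estimate for the OggDrop encode, assuming a
# CD-quality WAV source (44.1kHz, 16-bit samples, 2 channels).

WAV_BYTES_PER_SEC = 44_100 * 2 * 2                    # rate x bytes/sample x channels
duration_s = 32 * 1024 * 1024 / WAV_BYTES_PER_SEC     # ~190 seconds of audio
ogg_bytes = duration_s * 128_000 / 8                  # ~3MB at a nominal 128kbps
```

At roughly an 11:1 compression ratio, the workload is dominated by the psychoacoustic encoding itself, which is why it is a useful CPU test.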
Gaming Benchmark 1 – Company of Heroes
Using the benchmark found within the game, Company of Heroes was tested at 1024x768 resolution (antialiasing off, post processing on) with all other settings set to High. Though a higher resolution would certainly be worthy of the Core 2 Extreme, the lowest available resolution was chosen as a setting which the graphics card could handle without bottlenecking the processor too severely.
(See Figure 12 – Company of Heroes Chart)
As is evident in the chart above, the graphics card did become a bottleneck in the process even prior to system overclock. The performance gain from the Pentium D to the Core 2 Extreme was noticeable, but not commensurate with what would be expected from such a large architectural leap.
Gaming Benchmark 2 – Supreme Commander
Supreme Commander stresses both the processor and the graphics card: its RTS environment uses extensive collision detection, features an advanced and involved AI, and can put enormous numbers of units on screen at the same time. The game even manages individual projectiles (flight paths, etc.), which for obvious reasons results in an enormous number of calculations during gameplay.
(See Figure 13 – Supreme Commander Chart)
The situation in this benchmark is even worse, as the 7600GT core is completely overwhelmed by the swarm of activity found in the in-game benchmark sequence. In fact, the X6800 only reached peak CPU usage twice in the entire suite of benchmarks – and on just one of its cores. In a game this intensive, this is far from an optimal situation. The Pentium D, for its part, was genuinely taxed by the game, falling even further below the 7600GT’s performance ceiling.
Gaming Benchmark 3 – Unreal Tournament 2004
Unreal Tournament 2004 has long been known as a title which has lower graphics card requirements, instead relying on the computational horsepower of the CPU. Nevertheless, graphics settings were turned down to the modest “High Performance” setting selected in UMark to ensure that the graphics card would be removed from the equation as much as possible.
(See Figure 14 – Unreal Tournament 2004 Chart)
Here, at last, the Core 2 Extreme X6800 is free to stretch its legs, turning in an impressive 216 FPS in the Torlan map. The Pentium D 805 provided an acceptable frame rate of 88.84 FPS, but this lagged far behind the X6800. Of note, the benchmark was run at 1024x768, 1280x1024 and 1600x1200 resolutions, with frame rate remaining constant in each test.
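The constant frame rate across resolutions noted above is the classic signature of a CPU-bound workload: raising the resolution loads only the GPU, so if frame rate does not drop, the CPU is setting the pace. The per-frame time budgets implied by the scores are illustrated below:

```python
# Illustrative frame-time view of the UT2004 results: a constant FPS
# across 1024x768, 1280x1024 and 1600x1200 means the CPU, not the GPU,
# determines the frame rate.

def frame_time_ms(fps: float) -> float:
    """Milliseconds the CPU spends producing each frame."""
    return 1000.0 / fps

x6800_ms = frame_time_ms(216)    # ~4.6ms per frame
d805_ms = frame_time_ms(88.84)   # ~11.3ms per frame
```

The X6800 turns each frame around in well under half the time the Pentium D needs, matching the 2.4x gap in the FPS scores.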
With Vista drivers continuing to mature, it made sense to see whether the widely reported performance gap between Windows XP and Windows Vista had begun to close. Manufacturers are certainly jumping aboard the Vista bandwagon: NVIDIA, for their part, has been releasing Vista drivers at a very rapid pace in an effort to deal with driver quality complaints. Most recently, NVIDIA released a new set of drivers on 2-May-2007, superseding the version released less than two weeks prior.
To fairly test Windows XP performance against Windows Vista, both operating systems were run in their “clean” state, with no antivirus. In addition, the Desktop Sidebar icons which are activated under a default Windows Vista installation were all deactivated.
The following charts show the performance differences between the applications running in Windows XP and Windows Vista at stock speeds. These applications were specifically chosen because each ships a single set of binaries for both Vista and XP, eliminating code variations between the two platforms.
(See Figures 15 through 17 – Vista Performance Charts)
The charts demonstrate that Vista’s operating system overhead (and, potentially, less mature drivers) continues to cost gamers valuable performance in an apples-to-apples comparison with Windows XP. There are many reasons for this, not the least of which is that Vista simply has a much bigger system footprint than XP, both in memory use and in desktop-driven CPU activity. Equally, manufacturers have had six years to get accustomed to driver development in Windows XP (which changed very little from Windows 2000), whereas Vista driver authoring is still a relatively new skill. The end result is that, when running a single application and looking for raw performance, Windows XP remains the fastest of the Microsoft-supported operating systems.
Ubuntu Linux Gaming Performance
The Linux gaming scene continues to be very small, though as Linux gains traction, more users are investigating its possibilities. NVIDIA, presumably noticing this trend, continues to update their Linux drivers at a solid pace, the latest of which was released on March 7.
The most common Linux real-world game benchmark continues to be the venerable Unreal Tournament 2004. While Epic has released a native Linux version of this popular game, the reader is cautioned against drawing a direct comparison between Windows and Linux scores due to significant variations in the code base between the two versions. For this test, NVIDIA driver version 1.0-9755 was used.
(See Figure 18 – Linux Unreal Tournament 2004 Chart)
At over 70fps at the 1600x1200 setting, the test-bed system provides a fully playable game of Unreal Tournament 2004. The processor never exceeded 80% usage during this test, and the 7600GT was able to maintain a playable framerate throughout.
Having put the system through its paces, a lot of information was gleaned about the various components. The final installment of this three-part series will summarize the experience with each individually, as well as discuss the specific lessons learned by the experience of configuring and benchmarking the test-bed system.