Overclocking & Tweaking
Before we dive into a wide-ranging series of tests and benchmarks, we need to explain overclocking. With most videocards, we can do some easy tricks to boost the overall performance a little. You can do this at two levels: tweaking, by enabling registry or BIOS hacks or even tampering with image quality, and then there is overclocking, which by far will give you the best possible results.
What do we need?
One of the best tools for overclocking NVIDIA and ATI videocards is our own RivaTuner, which you can download here. If you own an NVIDIA graphics card, NVIDIA actually has very nice built-in options for you that can be found in the display driver properties. They are hidden though, and you'll need to enable them by installing a small registry hack called CoolBits, which you can download right here (after downloading and unpacking, just double-click the .reg file and confirm the import).
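For those curious what that .reg file actually changes, here is a minimal sketch of the same tweak in Python, assuming the commonly documented CoolBits location under HKEY_LOCAL_MACHINE; the exact DWORD value can vary between driver releases, so treat the value used here as an assumption and prefer the downloadable .reg file when in doubt.

# Minimal sketch of the CoolBits registry tweak (run as administrator).
# The key path is the commonly documented NVTweak location; the DWORD
# value 3 is assumed to expose the clock frequency controls for this
# driver era - use the official .reg file from the download above if unsure.
import winreg

KEY_PATH = r"SOFTWARE\NVIDIA Corporation\Global\NVTweak"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "CoolBits", 0, winreg.REG_DWORD, 3)

print("CoolBits set; reopen the display driver properties to find the new options.")

After reopening the driver control panel you should see the extra clock frequency controls appear.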
Where should we go?
Overclocking: by increasing the frequency of the videocard's memory and GPU, we can make the videocard run more calculation clock cycles per second. It sounds hard, but it really can be done in less than a few minutes. I always tend to recommend that novice users and beginners do not increase the core and memory clocks by more than 5-10%. Example: if your card runs at 300 MHz, then I suggest you don't increase the frequency any higher than 330 MHz.
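As a quick illustration of that rule of thumb, here is a tiny sketch; the margins and clock values are the ones quoted above, and the helper name is purely illustrative.

# Conservative first overclocking target following the 5-10% rule of
# thumb described above. Clock values are in MHz.
def safe_target(default_clock_mhz, margin=0.10):
    """Return a cautious upper limit for a first overclocking attempt."""
    return default_clock_mhz * (1.0 + margin)

print(safe_target(300))        # 330.0 - the 300 MHz example from the text
print(safe_target(300, 0.05))  # 315.0 - an even more careful 5% step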
More advanced users often push the frequency way higher. Usually, once your 3D graphics start to show artifacts such as white dots ("snow"), you should back down 10-15 MHz and leave it at that.
The core can behave somewhat differently. Usually when you overclock it too hard, it will start to show artifacts or empty polygons, or it will even freeze. I recommend backing down at least 15 MHz from the moment you notice an artifact. Look carefully and observe well.
All in all... do it at your own risk.
Overclocking your card too far, or running it constantly at its maximum limit, might damage it, and such damage is usually not covered by your warranty.
You will benefit from overclocking the most with a product that is limited, or as you might call it, "tuned down." We know that this graphics core is often held back by its clock frequency or by memory bandwidth, so by increasing the memory and core frequencies we should be able to see higher performance results. A simple trick to get some more bang for your buck.
The reference GeForce 6600 GT AGP runs a 500 MHz core at default clock speeds. Its DDR memory runs at (2x)450 MHz, thus 900 MHz effective. Overclocked, it was capable of running at a 553 MHz core and an 1100 MHz memory frequency.
A 525+ MHz core frequency is something we have observed with all 6600 cards tested to this date; it's just amazing. The non-GT models, which run at ~300 MHz by default, remain the best overclockers though, as you can gain a 200+ MHz boost. The GT is already pushed towards its theoretical maximum clock frequency.
But to be able to use that high core clock efficiently you need more memory bandwidth, and memory-wise the overclock was very good. Overall it's a nice overclock that will push the framerates a little higher. Take a good look at the numbers in the benchmarks, as you'll see a very nice difference when we enable the overclock.
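To put the memory overclock into perspective, here is a quick back-of-the-envelope bandwidth calculation based on the 6600 GT's 128-bit memory bus and the stock and overclocked effective clocks quoted above.

# Theoretical memory bandwidth before and after the overclock.
# 128-bit bus = 16 bytes per transfer; clocks are effective (DDR) rates.
BUS_WIDTH_BYTES = 128 // 8

def bandwidth_gb_s(effective_clock_mhz):
    """Effective memory clock (MHz) -> theoretical bandwidth in GB/s."""
    return effective_clock_mhz * 1_000_000 * BUS_WIDTH_BYTES / 1e9

stock = bandwidth_gb_s(900)         # ~14.4 GB/s at (2x)450 MHz
overclocked = bandwidth_gb_s(1100)  # ~17.6 GB/s at (2x)550 MHz
print(f"{stock:.1f} GB/s -> {overclocked:.1f} GB/s "
      f"(+{(overclocked / stock - 1) * 100:.0f}% bandwidth)")

That works out to roughly a 22% bandwidth gain, which is exactly why the memory overclock matters for feeding that faster core.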
One small reminder though: our overclocking results are never a guarantee of your results. Manufacturers' component choices differ, and so will the end results. This is, however, a good indication of what is possible (or not).
The Test System
Now we begin the benchmark portion of this article, but first let me show you our test system.
- Albatron Intel 865PE motherboard (Socket 478, AGP 8X enabled)
- Albatron PX915P/G Pro (PCI-Express 16x enabled)
- 1024 MB DDR400
- GeForce 6600/6600GT/6800GT/Radeon X600
- Pentium 4 class 3.6 GHz (Socket 775) / 3.4 GHz (Socket 478)
- Windows XP Professional
- DirectX 9.0c
- ForceWare 66.93 WHQL
- Radeon Catalyst 4.9 for ATI cards
- Latest reference chipset and AGP/ PCI-Express drivers
- RivaTuner 2.0 (tweak utility)
Benchmark Software Suite:
- Far Cry Guru3D config & timedemo
- Splinter Cell (Guru3D custom timedemo)
- Return to Castle Wolfenstein - Checkpoint DM60
- 3DMark03
Remark
Image quality between ATI and NVIDIA cards really is about equal, yet driver optimizations have made it very hard to do a 100% one-to-one performance comparison. ATI has trilinear optimizations enabled by default on its X800 series, so we enabled that option for the GeForce Series 6 also.
The anisotropic filtering settings that enable themselves in the ForceWare drivers when you enable AF/AA have been disabled by us, unless noted otherwise, to make the benchmarks as objective as possible for future comparisons.
All tests were run in 32-bit per pixel color at resolutions ranging from 800x600 pixels up to the Godfather of all gaming resolutions: 1600x1200. We also ran all tests with 4x Antialiasing and 8x Anisotropic Filtering where possible.
The numbers (FPS = Frames Per Second)
Note
Before we start with the benchmarks, I need to make something very clear about the test systems used. The GeForce 6600 GT AGP has a slight disadvantage compared to the PCI-Express version, as the test systems used differ a tiny bit. Since this article really is about the performance difference between the PCI-Express and AGP versions, I wanted, no, needed to do a direct comparison of that line-up of products. The problem then is the test systems, as we need one platform with PCI-Express and one with AGP.
We took an AGP 8x Intel 865PE (Socket 478) motherboard and a PCI-Express Intel 915P (Socket 775) motherboard, and configured both with equal settings, precise down to the MHz, on FSB/DDR and all related settings.
The big difference, however, is that the PCI-Express system has a 3.6 GHz processor and the AGP system a 3.4 GHz CPU, so there is a 200 MHz difference. In reality both systems perform quite close to one another, yet the AGP system is at a small disadvantage that can become apparent in CPU-limited games.