Far Cry
Contents
Far Cry is a game based on the Shader 3.0 model (DirectX 9.0c), which is supported by the GeForce 6 and 7 series from NVIDIA and the Radeon X1000 series from ATI.
As with the other programs, we ran this game only at 1024×768. Since we were evaluating low-end video cards, we decided not to run our tests at higher resolutions: a user who buys a video card at this level will rarely push resolutions above 1024×768 in 3D games.
This game allows several image quality levels, and we benchmarked two of them: low and very high. To measure performance we used the demo created by the German magazine PC Games Hardware (PCGH), available at https://www.3dcenter.org/downloads/farcry-pcgh-vga.php. We ran this demo four times and took the arithmetic average of the results. This average is the number presented in our graphs.
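The averaging step can be sketched as follows. This is a minimal illustration of the method described above; the FPS values are hypothetical, not the measurements from this review:

```python
# Hypothetical frame rates from four runs of the PCGH Far Cry demo
# (illustrative values only, not the article's actual measurements)
runs = [42.3, 41.8, 42.6, 42.1]

def average_fps(samples):
    """Arithmetic average of the per-run frame rates."""
    return sum(samples) / len(samples)

print(round(average_fps(runs), 2))  # 42.2
```

Averaging several runs smooths out run-to-run variation caused by background tasks and disk caching, which is why a single demo pass is not a reliable score.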
This game has a very important detail in its image quality configuration. Antialiasing, instead of being configured by numbers (1x, 2x, 4x or 6x), is configured as low, medium or high. The problem is that on NVIDIA chips both medium and high mean 4x, while on ATI chips medium means 2x and high means 6x, making a comparison between ATI and NVIDIA chips completely unfair. Because of that, we configured antialiasing at 4x and anisotropic filtering at 8x manually in the video driver control panel. Some very low-end video chips (Volari 8300 and Intel i915G) don’t support antialiasing, so we were not able to benchmark them using this configuration.
For further details on how to measure 3D performance with Far Cry, read our tutorial on this subject.
Running this game in its low video quality mode, Leadtek GeForce 6200 128 MB 128-bit was 56.14% faster than GeForce 6200 TurboCache 64 MB 64-bit (XFX), 66.69% faster than Radeon X300 SE HyperMemory 128 MB 64-bit (PowerColor), 67.88% faster than Radeon X300 128 MB 128-bit (ATI), 103.00% faster than GeForce 6200 TurboCache 16 MB 32-bit (Leadtek) and 146.70% faster than Volari 8300 128 MB (XGI).
Leadtek GeForce 6200 128 MB 128-bit was beaten by Radeon X1300 Pro 256 MB 128-bit (ATI), which was 36.36% faster, GeForce 6600 GT 128 MB (NVIDIA), which was 30.04% faster, and GeForce 6600 128 MB (Albatron), which was 26.29% faster.
Enabling video quality enhancements, Leadtek GeForce 6200 128 MB 128-bit was 92.05% faster than Radeon X300 SE HyperMemory 128 MB 64-bit (PowerColor), 92.63% faster than Radeon X300 128 MB 128-bit (ATI), 106.16% faster than GeForce 6200 TurboCache 64 MB 64-bit (XFX) and 482.01% faster than GeForce 6200 TurboCache 16 MB 32-bit (Leadtek).
Leadtek GeForce 6200 128 MB 128-bit was beaten by GeForce 6600 GT 128 MB (NVIDIA), which was 154.64% faster, Radeon X1300 Pro 256 MB 128-bit (ATI), which was 79.47% faster, and GeForce 6600 128 MB (Albatron), which was 37.87% faster.
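The "X% faster" figures above come from comparing the average frame rates of two cards. A minimal sketch of that calculation, using hypothetical FPS averages (not the numbers measured in this review):

```python
def percent_faster(fps_a, fps_b):
    """How much faster card A is than card B, in percent.

    A result of 50.0 means card A renders 1.5x the frames
    of card B in the same time.
    """
    return (fps_a / fps_b - 1) * 100

# Hypothetical averages for two cards (illustrative only)
print(round(percent_faster(60.0, 40.0), 2))  # 50.0
```

Note that the relation is not symmetric: if card A is 50% faster than card B, card B is about 33% slower than card A, which is why the article always states which card is the baseline.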