Far Cry is based on the Shader 3.0 (DirectX 9.0c) model, which is supported by NVIDIA's GeForce 6 and 7 series and ATI's Radeon X1000 series.
As with the other programs, we ran this game only at 1024×768. Since we were evaluating low-end video cards, we decided not to run our tests at higher resolutions: a user who buys a video card at this level will rarely push resolutions above 1024×768 in 3D games.
This game allows several image quality levels, and we benchmarked at two of them: low and very high. To measure performance we used the demo created by the German magazine PC Games Hardware (PCGH), available at https://www.3dcenter.org/downloads/farcry-pcgh-vga.php. We ran this demo four times and took the arithmetic average of the obtained results. This average is the result presented in our graphs.
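The averaging step above is simple, but for clarity here is a minimal sketch of it; the frame rates are hypothetical, for illustration only (we are not publishing the raw per-run numbers).

```python
# Arithmetic average of four benchmark runs of the PCGH Far Cry demo.
# The individual FPS values below are hypothetical placeholders.
fps_runs = [42.1, 41.8, 42.4, 42.0]  # hypothetical results of four runs
average_fps = sum(fps_runs) / len(fps_runs)
print(average_fps)  # this average is what would appear in the graph
```

Running the demo several times and averaging smooths out run-to-run variation caused by disk caching and background activity.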
This game has a very important detail in its image quality configuration. Antialiasing, instead of being configured by numbers (1x, 2x, 4x, or 6x), is configured as low, medium, or high. The problem is that on NVIDIA chips both medium and high mean 4x, while on ATI chips medium means 2x and high means 6x, making a direct comparison between ATI and NVIDIA chips completely unfair. Because of that, we configured antialiasing at 4x and anisotropic filtering at 8x manually in the video driver control panel. Some very low-end video chips (Volari 8300 and Intel i915G) don't support antialiasing, so we were not able to benchmark them using this configuration.
For further details on how to measure 3D performance with Far Cry, read our tutorial on this subject.
Running this game in its low video quality mode, XFX GeForce 6200 TurboCache 64 MB 64-bit was 6.76% faster than Radeon X300 SE HyperMemory 128 MB 64-bit (PowerColor), 7.52% faster than Radeon X300 128 MB 128-bit (ATI), 30.01% faster than GeForce 6200 TurboCache 16 MB 32-bit (Leadtek) and 58.00% faster than Volari 8300 128 MB (XGI).
XFX GeForce 6200 TurboCache 64 MB 64-bit was beaten by Radeon X1300 Pro 256 MB 128-bit (ATI), which was 112.91% faster, GeForce 6600 GT 128 MB (NVIDIA), which was 103.04% faster, GeForce 6600 128 MB (Albatron), which was 97.18% faster and GeForce 6200 128 MB 128-bit (Leadtek), which was 56.14% faster.
Enabling video quality enhancements, XFX GeForce 6200 TurboCache 64 MB 64-bit was 182.32% faster than GeForce 6200 TurboCache 16 MB 32-bit (Leadtek).
XFX GeForce 6200 TurboCache 64 MB 64-bit was beaten by GeForce 6600 GT 128 MB (NVIDIA), which was 424.95% faster, Radeon X1300 Pro 256 MB 128-bit (ATI), which was 269.98% faster, GeForce 6600 128 MB (Albatron), which was 184.23% faster, GeForce 6200 128 MB 128-bit (Leadtek), which was 106.16% faster, Radeon X300 SE HyperMemory 128 MB 64-bit (PowerColor), which was 7.34% faster and Radeon X300 128 MB 128-bit (ATI), which was 7.02% faster.
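All the "X% faster" figures in this section are simple ratios of average frame rates. A minimal sketch of that calculation, using hypothetical frame rates chosen to reproduce the 6.76% figure from the low-quality results above:

```python
def percent_faster(fps_a: float, fps_b: float) -> float:
    """How much faster (in %) card A is than card B, from average FPS."""
    return (fps_a / fps_b - 1.0) * 100.0

# Hypothetical averages: card A at 63.2 FPS vs. card B at 59.2 FPS.
print(round(percent_faster(63.2, 59.2), 2))
```

Note that the percentage is always relative to the slower card's frame rate, which is why figures above 100% (one card more than twice as fast as another) appear in the high-quality results.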