[nextpage title=”Introduction”]

The GeForce 6200 TurboCache is NVIDIA’s latest entry-level graphics chip for the PCI Express bus, competing directly with ATI’s Radeon X300 SE HyperMemory. In this review we compare the performance of XFX’s GeForce 6200 TurboCache with 64 MB (labeled “256 MB”) and a 64-bit interface against other GeForce 6200 models with different memory configurations, competing chips from ATI, and some mid-range chips (GeForce 6600 and GeForce 6600 GT).

Figure 1: GeForce 6200 TurboCache with 64 MB and 64-bit interface from XFX (PV-T44P-QAH).

TurboCache (“TC”) is a technology that simulates a 128 MB or 256 MB video card by using part of the system RAM as video memory. So, even though this card from XFX is labeled “256 MB TurboCache,” it actually has only 64 MB of video memory; the remaining 192 MB is borrowed from main RAM. ATI’s HyperMemory technology uses the same idea. Read our tutorial on TurboCache for more information on this subject.

The problem is not only that you can be deceived by the exaggerated “128 MB” or “256 MB.” Each card can also use a different number of bits on the memory interface: you can find GeForce 6200 TC parts using either 32-bit or 64-bit interfaces. Of course, 32-bit parts will achieve half the performance of 64-bit parts when accessing the video memory. Our main concern is that manufacturers usually don’t clearly state how many bits are used on the memory bus.
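To see why the bus width matters so much, here is a small illustrative calculation (our own sketch, not from NVIDIA’s specifications): peak theoretical bandwidth to local memory is simply bytes per transfer times effective transfers per second, so halving the bus width halves the bandwidth at the same memory clock.

```python
# Illustrative only: peak local-memory bandwidth as a function of bus width.
def peak_bandwidth_mb_s(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Peak theoretical bandwidth in MB/s = bytes per transfer x million transfers/s."""
    return (bus_width_bits / 8) * effective_clock_mhz

# A 64-bit part with memory effectively accessed at 500 MHz (as on the reviewed card):
print(peak_bandwidth_mb_s(64, 500))  # 4000.0 MB/s (~4 GB/s)
# An otherwise identical 32-bit part has exactly half the peak bandwidth:
print(peak_bandwidth_mb_s(32, 500))  # 2000.0 MB/s (~2 GB/s)
```

Real-world throughput is lower than these peak figures, but the 2:1 ratio between 64-bit and 32-bit parts holds.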

XFX, for example, has seven GeForce 6200 TurboCache models:

  • PV-T44P-QAM: 64 MB, 64-bit, low profile
  • PV-T44P-RAM: 128 MB, 64-bit, low profile
  • PV-T44P-KAH: 32 MB, 64-bit
  • PV-T44P-HAH: 16 MB, 32-bit
  • PV-T44P-KAM: 32 MB, 64-bit, low profile
  • PV-T44P-QAH: 64 MB, 64-bit
  • PV-T44P-JAM: 64 MB, 32-bit, low profile

The number of models with different memory quantity and memory bus width configurations is simply amazing. This causes a great deal of confusion on the market.

The model we reviewed was PV-T44P-QAH, which has 64 MB on-board (and is labeled as “256 MB”) and a 64-bit memory interface.

To make things even worse, NVIDIA doesn’t specify a default memory clock for GeForce 6200 TurboCache, i.e., each manufacturer can set whatever memory clock it wants. So you can find models on the market with different memory clock configurations.

So you have to be very careful when buying a GeForce 6200 TurboCache to make sure you get the exact model you want.

We ran the PowerStrip software to check the clocks used by the reviewed card; the model we reviewed was running at 350 MHz, with its memory being accessed at 500 MHz.

You can see in our tutorial “NVIDIA Chips Comparison Table” the difference between GeForce 6200 TurboCache chip and the other chips from NVIDIA, while on our tutorial “ATI Chips Comparison Table” you can compare it to its competitors from ATI.

Let’s now take a closer look at the GeForce 6200 TurboCache with 64 MB and 64-bit interface from XFX (PV-T44P-QAH).

[nextpage title=”The GeForce 6200 TurboCache PV-T44P-QAH from XFX”]

On Figures 2 and 3 you can check GeForce 6200 TurboCache with 64 MB and 64-bit interface from XFX. As you can see, this video card uses a passive heatsink (i.e., without a fan).

Figure 2: XFX PV-T44P-QAH.

Figure 3: XFX PV-T44P-QAH, back view.

This video card uses four 128-Mbit 4 ns DDR chips from Samsung (K4D261638F-TC40), making up its 64 MB of video memory (128 Mbit x 4 = 512 Mbit = 64 MB). These chips can run at up to 500 MHz. Since this video card already accesses its memory at 500 MHz, there is no room for memory overclocking within the memory’s specifications, but of course you can try overclocking it beyond its specs.
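The capacity and clock figures above follow from two pieces of common DDR arithmetic, sketched below (the datasheet is the authoritative source; the 4 ns rating corresponding to a 250 MHz real clock, doubled by DDR’s two transfers per cycle, is the usual convention for these chips):

```python
# Capacity: four 128-Mbit chips, 8 bits per byte.
chips = 4
mbit_per_chip = 128
total_mb = chips * mbit_per_chip / 8
print(total_mb)  # 64.0 MB of on-board memory

# Clock: a 4 ns cycle time means a 250 MHz real clock; DDR transfers data
# on both clock edges, giving an effective ("DDR") rate of 500 MHz.
access_time_ns = 4
real_clock_mhz = 1000 / access_time_ns
effective_mhz = real_clock_mhz * 2
print(effective_mhz)  # 500.0
```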

Figure 4: 4 ns DDR memory chip used by XFX PV-T44P-QAH video card.

[nextpage title=”Main Specifications”]

* Researched on Shopping.com on the day we published this review.

[nextpage title=”How We Tested”]

During our benchmarking sessions, we used the configuration listed below. Between our benchmarking sessions the only variable was the video card being tested.

Hardware Configuration

  • Motherboard: Intel D915GEV
  • CPU: Pentium 4 3.4 GHz LGA 775
  • Memory: Two Corsair 512 MB DDR2-533 memory modules (CM2X512-4200, CL4)
  • Hard Drive: Maxtor DiamondMax Plus 9 (40 GB, ATA-133)
  • Screen resolution: 1024x768x32@85 Hz

Software Configuration

  • Windows XP Professional installed using NTFS
  • Service Pack 2
  • DirectX 9.0c
  • Intel inf driver version: 7.2.2.1006
  • ATI video driver version: 5.11
  • NVIDIA video driver version: 81.95
  • Intel video driver version: 14.17
  • XGI video driver version: 3.01.130.D (6.14.1.3010)

Used Software

We adopted a 3% error margin; thus, differences below 3% cannot be considered relevant. In other words, products with a performance difference below 3% should be considered as having similar performance.
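The error-margin rule above can be expressed as a small decision procedure (our own sketch of how we apply it, with hypothetical scores for illustration):

```python
# A minimal sketch of the 3% error-margin rule used in this review.
def percent_faster(a: float, b: float) -> float:
    """How many percent score a is above score b."""
    return (a / b - 1.0) * 100.0

def compare(a: float, b: float, margin: float = 3.0) -> str:
    """Differences below the margin are treated as a tie."""
    diff = percent_faster(a, b)
    if abs(diff) < margin:
        return "similar performance"
    return "a is faster" if diff > 0 else "b is faster"

print(compare(1020, 1000))  # similar performance (only ~2% apart)
print(compare(1100, 1000))  # a is faster (~10% apart)
```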

[nextpage title=”3DMark2001 SE”]

3DMark2001 SE measures video card performance by simulating DirectX 8.1 games. It is very effective software for evaluating performance in previous-generation games, programmed using DirectX 8. In this software we ran two tests, both at 1024x768x32. Since we were evaluating low-end video cards, we decided not to run our tests at higher resolutions, since users who buy a video card at this level rarely push resolutions above 1024×768 in 3D games.

We first ran this software without antialiasing and with the default frame buffer setting, and then we set antialiasing to 4 samples and the frame buffer to triple-buffering. This improves the video quality but lowers the performance. We wanted to see how much performance we lost by making the card run at the maximum possible image quality. It is important to note that ATI chips can run at 6x antialiasing. Since NVIDIA chips cannot run at this setting, we had to use 4x antialiasing so that the configuration would be valid for all video cards included in our benchmark, allowing direct comparison between them. Also, some very low-end video chips (Volari 8300 and Intel i915G) don’t have an antialiasing feature, so we were not able to benchmark them using this configuration.

You may be asking yourself why we included an old program in a review of a latest-generation video card. To us, it is as important to know a video card’s performance in older games as it is to know its performance in the latest games. That’s why we kept this software in our methodology.


At the default 3DMark2001 SE configuration, XFX GeForce 6200 TurboCache 64 MB 64-bit was 24.33% faster than Radeon X300 SE HyperMemory 128 MB 64-bit (PowerColor), 24.42% faster than Radeon X300 128 MB 128-bit (ATI), 29.70% faster than GeForce 6200 TurboCache 16 MB 32-bit (Leadtek), 52.69% faster than i915G (Intel D915GEV) and 64.64% faster than Volari 8300 128 MB (XGI).

XFX GeForce 6200 TurboCache 64 MB 64-bit was beaten by Radeon X1300 Pro 256 MB 128-bit (ATI), which was 67.99% faster, GeForce 6600 128 MB (Albatron), which was 45.23% faster, GeForce 6600 GT 128 MB (NVIDIA), which was 41.02% faster and GeForce 6200 128 MB 128-bit (Leadtek), which was 35.40% faster.


Enabling video quality enhancements, XFX GeForce 6200 TurboCache 64 MB 64-bit was 271.95% faster than GeForce 6200 TurboCache 16 MB 32-bit (Leadtek).

XFX GeForce 6200 TurboCache 64 MB 64-bit was beaten by Radeon X1300 Pro 256 MB 128-bit (ATI), which was 354.49% faster, GeForce 6600 128 MB (Albatron), which was 218.13% faster, GeForce 6600 GT 128 MB (NVIDIA), which was 190.78% faster, GeForce 6200 128 MB 128-bit (Leadtek), which was 111.27% faster, Radeon X300 SE HyperMemory 128 MB 64-bit (PowerColor), which was 55.67% faster and Radeon X300 128 MB 128-bit (ATI), which was 54.99% faster.

[nextpage title=”3DMark03”]

3DMark03 measures performance by simulating games written for DirectX 9, i.e., contemporary games. In this software we ran two tests, both at 1024x768x32. Since we were evaluating low-end video cards, we decided not to run our tests at higher resolutions, since users who buy a video card at this level rarely push resolutions above 1024×768 in 3D games.

We first ran this software without antialiasing and without anisotropic filtering, and then we set antialiasing to 4 samples and anisotropic filtering to 4 samples. This improves the video quality but lowers the performance. We wanted to see how much performance we lost by making the card run at the maximum possible image quality. It is important to note that ATI chips can run at 6x antialiasing. Since NVIDIA chips cannot run at this setting, we had to use 4x antialiasing so that the configuration would be valid for all video cards included in our benchmark, allowing direct comparison between them. Also, some very low-end video chips (Volari 8300 and Intel i915G) don’t have an antialiasing feature, so we were not able to benchmark them using this configuration.


At the default 3DMark03 configuration, XFX GeForce 6200 TurboCache 64 MB 64-bit was 13.41% faster than Volari 8300 128 MB (XGI), 23.38% faster than GeForce 6200 TurboCache 16 MB 32-bit (Leadtek), 36.03% faster than Radeon X300 SE HyperMemory 128 MB 64-bit (PowerColor), 36.03% faster than Radeon X300 128 MB 128-bit (ATI) and 74.77% faster than i915G (Intel D915GEV).

XFX GeForce 6200 TurboCache 64 MB 64-bit was beaten by GeForce 6600 GT 128 MB (NVIDIA), which was 228.95% faster, Radeon X1300 Pro 256 MB 128-bit (ATI), which was 129.59% faster, GeForce 6600 128 MB (Albatron), which was 102.73% faster and GeForce 6200 128 MB 128-bit (Leadtek), which was 56.04% faster.


Enabling video quality enhancements, XFX GeForce 6200 TurboCache 64 MB 64-bit achieved performance similar to Radeon X300 128 MB 128-bit (ATI) and Radeon X300 SE HyperMemory 128 MB 64-bit (PowerColor), and it was 79.09% faster than GeForce 6200 TurboCache 16 MB 32-bit (Leadtek).

XFX GeForce 6200 TurboCache 64 MB 64-bit was beaten by GeForce 6600 GT 128 MB (NVIDIA), which was 438.96% faster, Radeon X1300 Pro 256 MB 128-bit (ATI), which was 277.92% faster, GeForce 6600 128 MB (Albatron), which was 198.84% faster and GeForce 6200 128 MB 128-bit (Leadtek), which was 107.63% faster.

[nextpage title=”3DMark05”]

3DMark05 measures performance by simulating DirectX 9.0c games, i.e., games using the new Shader 3.0 model. This programming model is used by Far Cry and by other games to be launched in the future, and it is supported by the GeForce 6 and 7 series from NVIDIA and the Radeon X1000 series from ATI.

In this software we ran two tests, both at 1024x768x32. Since we were evaluating low-end video cards, we decided not to run our tests at higher resolutions, since users who buy a video card at this level rarely push resolutions above 1024×768 in 3D games.

We first ran this software without antialiasing and without anisotropic filtering, and then we set antialiasing to 4 samples and anisotropic filtering to 4 samples. This improves the video quality but lowers the performance. We wanted to see how much performance we lost by making the card run at the maximum possible image quality. It is important to note that ATI chips can run at 6x antialiasing. Since NVIDIA chips cannot run at this setting, we had to use 4x antialiasing so that the configuration would be valid for all video cards included in our benchmark, allowing direct comparison between them. Also, some very low-end video chips (Volari 8300 and Intel i915G) don’t have an antialiasing feature, so we were not able to benchmark them using this configuration.


At the default 3DMark05 configuration, XFX GeForce 6200 TurboCache 64 MB 64-bit was 19.48% faster than Radeon X300 128 MB 128-bit (ATI), 19.91% faster than Radeon X300 SE HyperMemory 128 MB 64-bit (PowerColor), 31.78% faster than Volari 8300 128 MB (XGI), 36.65% faster than GeForce 6200 TurboCache 16 MB 32-bit (Leadtek) and 284.68% faster than i915G (Intel D915GEV).

XFX GeForce 6200 TurboCache 64 MB 64-bit was beaten by GeForce 6600 GT 128 MB (NVIDIA), which was 169.27% faster, Radeon X1300 Pro 256 MB 128-bit (ATI), which was 110.82% faster, GeForce 6600 128 MB (Albatron), which was 58.15% faster and GeForce 6200 128 MB 128-bit (Leadtek), which was 15.85% faster.


Enabling video quality enhancements, XFX GeForce 6200 TurboCache 64 MB 64-bit was 9.09% faster than Radeon X300 SE HyperMemory 128 MB 64-bit (PowerColor), 9.70% faster than Radeon X300 128 MB 128-bit (ATI) and 45.86% faster than GeForce 6200 TurboCache 16 MB 32-bit (Leadtek).

XFX GeForce 6200 TurboCache 64 MB 64-bit was beaten by GeForce 6600 GT 128 MB (NVIDIA), which was 227.15% faster, Radeon X1300 Pro 256 MB 128-bit (ATI), which was 202.53% faster, GeForce 6600 128 MB (Albatron), which was 99.37% faster and GeForce 6200 128 MB 128-bit (Leadtek), which was 46.72% faster.

[nextpage title=”Doom 3”]

Doom 3 is one of the heaviest games available today. As we’ve done with other programs, we ran this game at three resolutions: 1024x768x32, 1280x1024x32 and 1600x1200x32. This game allows several image quality levels, and we benchmarked at two of them, low and high. We ran demo1 four times and recorded the number of frames per second obtained. We discarded the first result at once, since it is far inferior to the other results. This happens because on the first run the game must load all textures into video memory, something that doesn’t happen from the second run on. From the three results left, we take the middle one as our official result, i.e., we discard the highest and the lowest values. Curiously, the values obtained from the second run onward were almost always the same.
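The run-selection rule described above can be sketched as a tiny function (our own illustration; the function name and the sample FPS values are hypothetical):

```python
# Sketch of the Doom 3 result-selection rule: run the demo four times,
# discard the first (texture-loading) run, and take the median of the
# remaining three, i.e., drop the highest and lowest values.
def official_fps(runs: list[float]) -> float:
    assert len(runs) == 4, "methodology uses exactly four runs"
    remaining = runs[1:]          # first run discarded outright
    return sorted(remaining)[1]   # middle of the three values left

# The first run is noticeably slower; the median of the rest is the result.
print(official_fps([38.0, 55.2, 55.2, 54.9]))  # 55.2
```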

A very important detail we must mention is that Doom 3 has an internal FPS lock: it generates at most 60 frames per second, even if your board is able to produce more. This is done to give the game the same “playability” sensation regardless of the video card installed in the PC. This lock, however, is disabled in the game’s benchmarking mode.

For further details on how to measure 3D performance with Doom 3, read our tutorial on this subject.


Running this game in its low video quality mode, XFX GeForce 6200 TurboCache 64 MB 64-bit was 34.46% faster than GeForce 6200 TurboCache 16 MB 32-bit (Leadtek), 42.14% faster than Radeon X300 SE HyperMemory 128 MB 64-bit (PowerColor), 42.14% faster than Radeon X300 128 MB 128-bit (ATI) and 121.11% faster than Volari 8300 128 MB (XGI).

XFX GeForce 6200 TurboCache 64 MB 64-bit was beaten by GeForce 6600 GT 128 MB (NVIDIA), which was 268.84% faster, GeForce 6600 128 MB (Albatron), which was 180.90% faster, Radeon X1300 Pro 256 MB 128-bit (ATI), which was 127.14% faster and GeForce 6200 128 MB 128-bit (Leadtek), which was 117.09% faster.


Enabling video quality enhancements, XFX GeForce 6200 TurboCache 64 MB 64-bit was 31.21% faster than GeForce 6200 TurboCache 16 MB 32-bit (Leadtek), 38.06% faster than Radeon X300 SE HyperMemory 128 MB 64-bit (PowerColor), 38.06% faster than Radeon X300 128 MB 128-bit (ATI) and 143.42% faster than Volari 8300 128 MB (XGI).

XFX GeForce 6200 TurboCache 64 MB 64-bit was beaten by GeForce 6600 GT 128 MB (NVIDIA), which was 288.65% faster, GeForce 6600 128 MB (Albatron), which was 183.24% faster, Radeon X1300 Pro 256 MB 128-bit (ATI), which was 127.57% faster and GeForce 6200 128 MB 128-bit (Leadtek), which was 118.38% faster.

[nextpage title=”Far Cry”]

Far Cry is a game based on the new Shader 3.0 (DirectX 9.0c) model, which is supported by the GeForce 6 and 7 series from NVIDIA and the Radeon X1000 series from ATI.
 
As we’ve done with other programs, we ran this game only at 1024×768. Since we were evaluating low-end video cards, we decided not to run our tests at higher resolutions, since users who buy a video card at this level rarely push resolutions above 1024×768 in 3D games.

This game allows several image quality levels, and we benchmarked at two of them: low and very high. To measure performance we used the demo created by the German magazine PC Games Hardware (PCGH), available at https://www.3dcenter.org/downloads/farcry-pcgh-vga.php. We ran this demo four times and took the arithmetic average of the results. This average is the result presented in our graphs.

This game has a very important detail in its image quality configuration. Antialiasing, instead of being configured by numbers (1x, 2x, 4x or 6x), is configured as low, medium or high. The problem is that on NVIDIA chips both medium and high mean 4x, while on ATI chips medium means 2x and high means 6x, making a comparison between ATI and NVIDIA chips completely unfair. Because of that, we manually configured antialiasing at 4x and anisotropic filtering at 8x in the video driver control panel. Some very low-end video chips (Volari 8300 and Intel i915G) don’t have an antialiasing feature, so we were not able to benchmark them using this configuration.

For further details on how to measure 3D performance with Far Cry, read our tutorial on this subject.


Running this game in its low video quality mode, XFX GeForce 6200 TurboCache 64 MB 64-bit was 6.76% faster than Radeon X300 SE HyperMemory 128 MB 64-bit (PowerColor), 7.52% faster than Radeon X300 128 MB 128-bit (ATI), 30.01% faster than GeForce 6200 TurboCache 16 MB 32-bit (Leadtek) and 58.00% faster than Volari 8300 128 MB (XGI).

XFX GeForce 6200 TurboCache 64 MB 64-bit was beaten by Radeon X1300 Pro 256 MB 128-bit (ATI), which was 112.91% faster, GeForce 6600 GT 128 MB (NVIDIA), which was 103.04% faster, GeForce 6600 128 MB (Albatron), which was 97.18% faster and GeForce 6200 128 MB 128-bit (Leadtek), which was 56.14% faster.


Enabling video quality enhancements, XFX GeForce 6200 TurboCache 64 MB 64-bit was 182.32% faster than GeForce 6200 TurboCache 16 MB 32-bit (Leadtek).

XFX GeForce 6200 TurboCache 64 MB 64-bit was beaten by GeForce 6600 GT 128 MB (NVIDIA), which was 424.95% faster, Radeon X1300 Pro 256 MB 128-bit (ATI), which was 269.98% faster, GeForce 6600 128 MB (Albatron), which was 184.23% faster, GeForce 6200 128 MB 128-bit (Leadtek), which was 106.16% faster, Radeon X300 SE HyperMemory 128 MB 64-bit (PowerColor), which was 7.34% faster and Radeon X300 128 MB 128-bit (ATI), which was 7.02% faster.

[nextpage title=”Conclusions”]

Both GeForce 6200 TurboCache and Radeon X300 SE HyperMemory are clearly targeted at the very entry-level PCI Express market. The idea behind these two chips is to power the cheapest video cards that still perform better than on-board video.

The performance of the reviewed GeForce 6200 TurboCache model from XFX, PV-T44P-QAH – which has 64 MB of video memory and a 64-bit memory interface – was higher than that of the reviewed model from Leadtek, which has only 16 MB of video memory and uses a 32-bit memory interface. So if you want to buy a GeForce 6200 TurboCache, go for a 64-bit memory interface model with at least 64 MB.

In our reviews, the “normal” GeForce 6200 with 128 MB of “real” memory achieved performance superior to that of both GeForce 6200 TurboCache boards we reviewed. Keep in mind, however, that the reviewed TurboCache boards accessed memory through 64-bit and 32-bit interfaces, while the standard GeForce 6200 we reviewed used a 128-bit interface.

The amazing number of different GeForce 6200 models makes it hard to decide which one to buy. If you decide to buy a GeForce 6200, prefer a “real” GeForce 6200 with 128 MB and a 128-bit interface. The problem, however, is that this model costs more. In our opinion, you should go for the TurboCache models only if you have a really tight budget.

Compared to its main competitor, Radeon X300 SE HyperMemory, the results were quite interesting. Only on 3DMark2001 SE with increased image quality settings was Radeon X300 SE HyperMemory faster. On all other tests, the GeForce 6200 TurboCache from XFX was faster than or tied with the Radeon X300 SE HyperMemory from PowerColor. We could claim that Radeon X300 SE HyperMemory is a good video card if you want to play DirectX 8.1 games with high image quality settings, but in our opinion nobody who owns a low-end video card will enable high image quality settings – people usually prefer performance over quality when using a low-end card.

Since the GeForce 6200 TurboCache with a 64-bit memory interface and 64 MB and the Radeon X300 SE HyperMemory are in the same price range, we can say that the GeForce 6200 TurboCache is the better buy, since it provides better performance.

Keep in mind that, compared to a GeForce 6200 TurboCache with only 16 MB, the Radeon X300 SE HyperMemory is faster.

In summary, this model from XFX is a better buy than a Radeon X300 SE HyperMemory and is clearly targeted at people who want to play some games, can’t stand on-board video performance, and have only USD 50 to USD 60 to spend on a video card. If you have a little more to spend, consider the “real” GeForce 6200 with 128 MB and a 128-bit memory interface.