
EVGA GeForce GTX 680 Video Card Review

[nextpage title=”Introduction”]

The GeForce GTX 680 is the latest and greatest graphics processor from NVIDIA. It uses a new architecture called “Kepler,” a new 28 nm manufacturing process, and a dynamic overclocking technology, and it supports the PCI Express 3.0 specification.

EVGA offers four different video cards based on the GeForce GTX 680; we summarize the differences between them in the table below. (All have 2 GB of GDDR5 memory.) The “SC+ w/Backplate” model comes with a plate protecting the solder side of the printed circuit board, a feature not available on the regular model. The “Hydro Copper” model comes with a cold plate for liquid cooling solutions. We reviewed the plain GeForce GTX 680, which uses the NVIDIA reference clocks.

| Video Card | Part Number | Core Clock | Memory Clock (Effective) | Memory Transfer Rate | MSRP in the U.S. |
| --- | --- | --- | --- | --- | --- |
| EVGA GeForce GTX 680 | 02G-P4-2680-KR | 1,006 MHz | 6,008 MHz | 192.2 GB/s | USD 500 |
| EVGA GeForce GTX 680 Superclocked | 02G-P4-2682-KR | 1,058 MHz | 6,208 MHz | 198.7 GB/s | USD 520 |
| EVGA GeForce GTX 680 SC+ w/Backplate | 02G-P4-2684-KR | 1,058 MHz | 6,208 MHz | 198.7 GB/s | USD 530 |
| EVGA GeForce GTX 680 Hydro Copper | 02G-P4-2689-KR | 1,150 MHz | 6,300 MHz | 201.6 GB/s | USD 700 |

The GeForce GTX 680 features a dynamic overclocking technology, called GPU Boost, that raises the graphics chip’s clock from 1,006 MHz up to 1,058 MHz when more processing power is demanded. Thanks to the new manufacturing process, the maximum power consumed by the video card dropped from 244 W on the GeForce GTX 580 (which uses a 40 nm manufacturing process) to 195 W, while processing power increased.

In the table below, we compare the main specifications of the video cards included in our review. Prices were researched at Newegg.com on the day we published this review, do not include rebates, and are for the video cards with the configurations listed below (i.e., no factory overclocking). With the launch of the GeForce GTX 680, the price of the GeForce GTX 580 dropped from USD 500 to USD 420, and many users are speculating whether or not AMD will drop the price of the Radeon HD 7970.

| Video Card | Core Clock | Shader Clock | Memory Clock (Effective) | Memory Interface | Memory Transfer Rate | Memory | Shaders | DirectX | PCI-E | Price |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EVGA GeForce GTX 680 | 1,006 MHz | N/A | 6,008 MHz | 256-bit | 192.2 GB/s | 2 GB GDDR5 | 1,536 | 11 | 3.0 | USD 500 |
| GeForce GTX 580 | 772 MHz | 1,544 MHz | 4,008 MHz | 384-bit | 192.4 GB/s | 1.5 GB GDDR5 | 512 | 11 | 2.0 | USD 420 |
| Radeon HD 7970 | 925 MHz | 925 MHz | 5,500 MHz | 384-bit | 264 GB/s | 3 GB GDDR5 | 2,048 | 11.1 | 3.0 | USD 530-550 |
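As a quick cross-check of the “Memory Transfer Rate” column, the bandwidth of these cards is simply the memory bus width in bytes multiplied by the effective memory clock. The short Python sketch below is our illustration (not part of the original review); the inputs come from the table above, and the results agree with the listed figures to within rounding.

```python
# Memory bandwidth = (bus width in bits / 8) bytes per transfer
#                    x effective memory clock in MHz, converted to GB/s.
def memory_bandwidth_gbs(bus_width_bits: int, effective_clock_mhz: float) -> float:
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * effective_clock_mhz / 1000.0  # MB/s -> GB/s

cards = {
    "GeForce GTX 680": (256, 6008),   # listed as 192.2 GB/s
    "GeForce GTX 580": (384, 4008),   # listed as 192.4 GB/s
    "Radeon HD 7970":  (384, 5500),   # listed as 264 GB/s
}

for name, (width, clock) in cards.items():
    print(f"{name}: {memory_bandwidth_gbs(width, clock):.1f} GB/s")
```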

NVIDIA also offers a 3 GB version of the GeForce GTX 580 (currently around USD 490, down from between USD 530 and USD 590 in December 2011), but we didn’t have one to include in our comparison.

You can compare the specs of these video cards with other video cards by taking a look at our “AMD ATI Chips Comparison Table” and “NVIDIA Chips Comparison Table” tutorials.

Today, only the LGA2011 Core i7 processors (“Sandy Bridge-E”) have a PCI Express 3.0 controller. In our Radeon HD 7970 review, we discovered that, at this time, there is no performance difference between using a PCI Express 2.0 and a PCI Express 3.0 connection. We also discovered that, if you are using a high-end video card, the choice of CPU has little effect on gaming performance.

Now let’s take a complete look at the EVGA GeForce GTX 680.

[nextpage title=”The EVGA GeForce GTX 680″]

Below we have an overall look at the EVGA GeForce GTX 680, which follows the NVIDIA reference model. It requires two six-pin auxiliary power connectors.

Figure 1: EVGA GeForce GTX 680

Figure 2: EVGA GeForce GTX 680

This video card supports up to four video monitors and offers one DVI-I, one DVI-D, one HDMI, and one DisplayPort connector.

Figure 3: Video connectors

[nextpage title=”The EVGA GeForce GTX 680 (Cont’d)”]

The GeForce GTX 680 uses a cooler with a 63 mm radial fan, located beside the heatsink. The heatsink uses aluminum fins and a copper base.

Figure 4: Heatsink

Figure 5: Heatsink

In Figure 6, you can see the video card with its cooler removed. It uses a voltage regulator with four phases for the GPU and two phases for the memory chips. The voltage regulator circuit uses a digital design and is controlled by an RT8802A chip. All coils use ferrite cores and all capacitors are solid.

Figure 6: EVGA GeForce GTX 680

Figure 7: Main voltage regulator

The reviewed video card uses eight Hynix H5GQ2H24MFR-R0C GDDR5 chips, each storing 2 Gbit of data, which together provide the 2 GB of memory available on this video card. Each chip is connected to the GPU through a 32-bit lane, forming the card’s 256-bit datapath. These chips are rated to run at up to 6 GHz; on this video card, they are accessed at 6,008 MHz, already a tiny bit above their labeled clock rate. Of course, you can always try to push the memory clock further above its specs.

Figure 8: Memory chips

Before seeing the performance results, let’s recap the main features of this video card.

[nextpage title=”Main Specifications”]

The main specifications for the EVGA GeForce GTX 680 include:

  • Graphics chip: GeForce GTX 680 running at 1,006 MHz
  • Memory: 2 GB GDDR5 memory (256-bit interface) running at 6,008 MHz QDR (Hynix H5GQ2H24MFR-R0C chips)
  • Bus type: PCI Express 3.0 x16
  • Video Connectors: One DVI-D, one DVI-I, one HDMI, and one DisplayPort
  • Video Capture (VIVO): No
  • Cables and adapters that come with this board: None
  • Number of CDs/DVDs that come with this board: One
  • Games included: None
  • Programs included: Drivers
  • More information: https://www.evga.com
  • Average Price in the U.S.*: USD 500.00

* Researched at Newegg.com on the day we published this review.

[nextpage title=”How We Tested”]

During our benchmarking sessions, we used the configuration listed below. Between our benchmarking sessions, the only variable was the video card being tested.

Hardware Configuration

  • CPU: Core i7-3960X (3.3 GHz)
  • Motherboard: Intel DX79SI (0460 BIOS)
  • Memory: 16 GB DDR3-2133/PC3-17000, four G.Skill Ripjaws Z F3-17000CL9Q-16GBBZH memory modules
  • Hard disk drive: Western Digital VelociRaptor WD3000GLFS (300 GB, SATA-300, 10,000 rpm, 16 MB cache)
  • Video monitor: Samsung SyncMaster 305T (30” LCD, 2560×1600)
  • Power Supply: Antec TruePower New 750 W
  • CPU Cooler: Intel Liquid Cooling

Software Configuration

  • Windows 7 Ultimate 64-bit
  • Video resolution: 2560×1600 @ 60 Hz

Driver Versions

  • NVIDIA video driver version (GeForce GTX 680): 301.10
  • NVIDIA video driver version (GeForce GTX 580): 296.10
  • AMD video driver version: Catalyst 12.3
  • Intel INF driver version: 9.2.3.1022

Software Used

  • 3DMark 11 Professional 1.0.3
  • Aliens vs. Predator + Benchmark Tool
  • Battlefield 3
  • Deus Ex: Human Revolution
  • DiRT3
  • Far Cry 2 – Patch 1.03
  • Media Espresso 6.5
  • StarCraft II: Wings of Liberty – Patch 1.4.3

Error Margin

We adopted a 3% error margin. Thus, differences below 3% cannot be considered relevant. In other words, products with a performance difference below 3% should be considered as having similar performance.
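For readers who want to apply the same rule to the tables that follow, the sketch below is our illustration (not part of the test procedure): it computes the gap between two frame-rate results as a percentage of the slower one, which is how the “Difference” columns in the gaming tables can be read, and flags whether the gap exceeds the 3% margin.

```python
ERROR_MARGIN = 3.0  # percent

def fps_difference(fps_a: float, fps_b: float) -> float:
    """Gap between two results, as a percentage of the slower one."""
    slower, faster = sorted((fps_a, fps_b))
    return (faster - slower) / slower * 100.0

def is_relevant(fps_a: float, fps_b: float) -> bool:
    """True only when the gap exceeds the 3% error margin."""
    return fps_difference(fps_a, fps_b) >= ERROR_MARGIN

# Example with the StarCraft II 2560x1600 results from this review:
print(fps_difference(92.5, 90.5))  # ~2.2% -> below the margin, similar performance
print(is_relevant(92.5, 90.5))     # False
print(fps_difference(92.5, 87.6))  # ~5.6% -> a relevant difference
```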

[nextpage title=”StarCraft II: Wings of Liberty”]

StarCraft II: Wings of Liberty is a very popular DirectX 9 game that was released in 2010. Though this game uses an old version of DirectX, the number of textures it can display on one screen at a time can push most top-end graphics cards to their limits. StarCraft II: Wings of Liberty uses its own physics engine, which is bound to the CPU and thus does not benefit from PhysX.

We tested this game at 1920×1200 and 2560×1600. The “Graphics Quality” was set to the “extreme” preset, while the “Texture Quality” was set to “Ultra.” We then used FRAPS to collect the frame rate of a replay on the “Unit Testing” custom map. We used a battle between very large armies to stress the video cards.
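FRAPS can log the frame rate once per second while the replay runs. As an illustration only (the file name and single-column layout below are assumptions, not the exact files we used), reducing such a log to the average frame rate we report takes just a few lines of Python:

```python
import csv

def average_fps(log_path: str) -> float:
    """Average a per-second FPS log written during a benchmark run."""
    with open(log_path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        samples = [float(row[0]) for row in reader if row]
    return sum(samples) / len(samples)

# Hypothetical log file name, for illustration:
# print(average_fps("sc2_replay_fps.csv"))
```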

| StarCraft II: Wings of Liberty 1920×1200 | Frames per Second | Difference |
| --- | --- | --- |
| GeForce GTX 680 | 113.6 | |
| Radeon HD 7970 | 107.8 | 5% |
| GeForce GTX 580 | 101.4 | 12% |

| StarCraft II: Wings of Liberty 2560×1600 | Frames per Second | Difference |
| --- | --- | --- |
| GeForce GTX 680 | 92.5 | |
| Radeon HD 7970 | 90.5 | 2% |
| GeForce GTX 580 | 87.6 | 6% |

[nextpage title=”Far Cry 2″]

Released in 2008, Far Cry 2 is based on a game engine called Dunia, which is DirectX 10. We used the benchmarking utility that comes with this game at 1920×1200 and 2560×1600, setting the “Overall Quality” to “Ultra High,” maximizing all image quality settings, adjusting anti-aliasing to “8x,” and running the “Ranch Long” demo three times. The results below are expressed in frames per second and are an arithmetic average of the three results collected.

| Far Cry 2 1920×1200 | Frames per Second | Difference |
| --- | --- | --- |
| GeForce GTX 680 | 125.0 | |
| GeForce GTX 580 | 104.5 | 20% |
| Radeon HD 7970 | 97.2 | 29% |

| Far Cry 2 2560×1600 | Frames per Second | Difference |
| --- | --- | --- |
| GeForce GTX 680 | 83.6 | |
| GeForce GTX 580 | 71.1 | 18% |
| Radeon HD 7970 | 70.0 | 19% |

[nextpage title=”Aliens vs. Predator”]

Aliens vs. Predator is a DirectX 11 game that makes full use of tessellation and advanced shadow rendering. We used the Aliens vs. Predator Benchmark Tool developed by Rebellion. This program reads its configuration from a text file. (Our configuration files can be found here.) We ran this program at 1920×1200 and 2560×1600, with “Texture Quality” set at “Very High,” “Shadow Quality” set at “Medium,” anisotropic filtering set at “8x,” and anti-aliasing set at “2x.”

| Aliens vs. Predator 1920×1200 | Frames per Second | Difference |
| --- | --- | --- |
| Radeon HD 7970 | 70.4 | 8% |
| GeForce GTX 680 | 65.1 | |
| GeForce GTX 580 | 52.5 | 24% |

| Aliens vs. Predator 2560×1600 | Frames per Second | Difference |
| --- | --- | --- |
| Radeon HD 7970 | 44.2 | 12% |
| GeForce GTX 680 | 39.3 | |
| GeForce GTX 580 | 32.5 | 21% |

[nextpage title=”DiRT3″]

DiRT3 is a DirectX 11 game. We measured performance by running a race, playing back the replay, and recording the frame rate with FRAPS. We ran this game at 1920×1200 and 2560×1600 with image quality set to “Ultra” and anti-aliasing set at “8xMSAA.”

| DiRT3 1920×1200 | Frames per Second | Difference |
| --- | --- | --- |
| GeForce GTX 680 | 93.6 | |
| Radeon HD 7970 | 80.7 | 16% |
| GeForce GTX 580 | 73.5 | 27% |

| DiRT3 2560×1600 | Frames per Second | Difference |
| --- | --- | --- |
| GeForce GTX 680 | 64.5 | |
| Radeon HD 7970 | 57.3 | 13% |
| GeForce GTX 580 | 49.9 | 29% |

[nextpage title=”Deus Ex: Human Revolution”]

Deus Ex: Human Revolution is another DirectX 11 game. We used the in-game introduction to measure the number of frames per second with FRAPS. We ran the introduction at two resolutions, 1920×1200 and 2560×1600, configuring “Anti-Aliasing Mode” at “MLAA,” “Texture Filtering” at “16x Anisotropic,” “Shadows” at “Soft,” “SSAO” at “High,” and “DOF” at “High.”

| Deus Ex: Human Revolution 1920×1200 | Frames per Second | Difference |
| --- | --- | --- |
| Radeon HD 7970 | 161.9 | 0% |
| GeForce GTX 680 | 161.8 | |
| GeForce GTX 580 | 129.4 | 25% |

| Deus Ex: Human Revolution 2560×1600 | Frames per Second | Difference |
| --- | --- | --- |
| Radeon HD 7970 | 113.4 | 4% |
| GeForce GTX 680 | 109.5 | |
| GeForce GTX 580 | 92.7 | 18% |

[nextpage title=”Battlefield 3″]

Battlefield 3, released in 2011, is the latest installment in the Battlefield franchise. It is based on the Frostbite 2 engine, which is DirectX 11. To measure performance, we walked through the first half of the “Operation Swordbreaker” mission, recording the number of frames per second with FRAPS. We ran this game at 1920×1200 and 2560×1600, maximizing all image quality settings, with anti-aliasing configured as “4xMSAA” and anisotropic filtering at “16x.”

| Battlefield 3 1920×1200 | Frames per Second | Difference |
| --- | --- | --- |
| GeForce GTX 680 | 68.6 | |
| Radeon HD 7970 | 60.5 | 13% |
| GeForce GTX 580 | 52.4 | 31% |

| Battlefield 3 2560×1600 | Frames per Second | Difference |
| --- | --- | --- |
| GeForce GTX 680 | 42.9 | |
| Radeon HD 7970 | 41.0 | 5% |
| GeForce GTX 580 | 32.1 | 34% |

[nextpage title=”3DMark 11 Professional”]

3DMark 11 Professional measures Shader 5.0 (i.e., DirectX 11) performance. We ran this program at 1920×1200 and 2560×1600, selecting only the four graphics tests and deselecting the other available tests. We used two image quality presets, “Performance” and “Extreme,” both at their default settings. The results being compared are the “GPU Score” achieved by each video card.

| 3DMark 11 – Performance 1920×1200 | GPU Score | Difference |
| --- | --- | --- |
| GeForce GTX 680 | 4854 | |
| Radeon HD 7970 | 4047 | 20% |
| GeForce GTX 580 | 3184 | 52% |

| 3DMark 11 – Performance 2560×1600 | GPU Score | Difference |
| --- | --- | --- |
| GeForce GTX 680 | 2818 | |
| Radeon HD 7970 | 2398 | 18% |
| GeForce GTX 580 | 1850 | 52% |

| 3DMark 11 – Extreme 1920×1200 | GPU Score | Difference |
| --- | --- | --- |
| GeForce GTX 680 | 2951 | |
| Radeon HD 7970 | 2484 | 19% |
| GeForce GTX 580 | 1904 | 55% |

| 3DMark 11 – Extreme 2560×1600 | GPU Score | Difference |
| --- | --- | --- |
| GeForce GTX 680 | 1753 | |
| Radeon HD 7970 | 1545 | 13% |
| GeForce GTX 580 | 1154 | 52% |

[nextpage title=”Media Espresso 6.5″]

Media Espresso is a video conversion program that uses the graphics processing unit of the video card to speed up the conversion process. We converted a 449 MB, 1920×1080i, 18,884 kbps MPEG-2 video file to a smaller 640×360 H.264 (MP4) file for viewing on a portable device such as an iPhone or iPod Touch. We also ran this test on our CPU (Core i7-3960X) to compare the performance of a high-end CPU against a high-end GPU when transcoding video.

| Media Espresso 6.5 | Seconds | Difference |
| --- | --- | --- |
| GeForce GTX 680 | 29 | |
| Radeon HD 7970 | 30 | 3% |
| GeForce GTX 580 | 42 | 31% |
| Core i7-3960X | 46 | 37% |
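Since this test measures elapsed time, lower is better. Assuming the “Difference” column above expresses how much less conversion time the GeForce GTX 680 needed relative to each slower device (our reading, sketched below for illustration only), the listed percentages work out as follows.

```python
# Conversion times in seconds, taken from the table above.
times = {
    "GeForce GTX 680": 29,
    "Radeon HD 7970": 30,
    "GeForce GTX 580": 42,
    "Core i7-3960X": 46,
}

fastest = min(times.values())
for device, seconds in times.items():
    # Time saved by the fastest device, as a percentage of this device's time.
    time_saved = (seconds - fastest) / seconds * 100.0
    print(f"{device}: {seconds} s (fastest device needs {time_saved:.0f}% less time)")
```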

[nextpage title=”Conclusions”]

It is amazing how fast the video card market changes. Last December, we were impressed with the Radeon HD 7970 and said it was the fastest single-GPU video card ever released. But three months later, NVIDIA has taken the crown from AMD: the GeForce GTX 680 is faster and cheaper than the Radeon HD 7970, and it is our current recommendation if you are looking for the latest and greatest video card and have USD 500 to spend on one. Of course, for the average user there are several more cost-effective options out there.

In our tests, the GeForce GTX 680 was up to 29% faster than the Radeon HD 7970. The only exceptions were in Aliens vs. Predator, where the competitor from AMD was between 8% and 12% faster, and in Deus Ex: Human Revolution, where the two cards achieved the same performance at 1920×1200, with the Radeon HD 7970 slightly faster at 2560×1600. Compared to its predecessor, the GeForce GTX 580, the new model was between 12% and 55% faster, depending on the program.
