We got so excited about participating in the Folding@home project that we built as many high-performance systems as we could, running both the SMP and GPU clients. We were very happy with the results until we received our first electricity bill: our energy consumption more than doubled, and we hadn't even had our systems running 24/7 for 30 days! Since we still wanted to contribute as much as we could to Folding@home, we went on a quest to find out whether there is a way to score lots of points at Folding@home without going bankrupt. We gathered all the video cards we had available in our lab to see which one provided the best performance/consumption ratio. Check it out.
If you are not familiar with the Folding@home project, it is a project sponsored by Stanford University that uses computers all around the world to run protein folding simulations in order to find cures for certain diseases. You can collaborate with the project by installing a client on your PC; when your PC is idle, it automatically downloads work, runs the calculations, and sends the results back to Stanford. This way they get the largest supercomputer in the world (made up of the collaborative effort of everyone participating in the project) without spending a dime.
If you want to collaborate more, you can install and run high-performance clients, like the SMP client (which recognizes more than one CPU or more than one CPU core; the standard client only recognizes one CPU core) and the GPU client (which uses the graphics chip on your video card to do the calculations, a technique called GPGPU, or General-Purpose computing on Graphics Processing Units). These clients will complete the calculations at a faster pace but, on the other hand, will consume more power, increasing your electricity bill. Finding the "perfect" balance between performance and power consumption is the goal of this article.
For every completed work unit you send back to the university, you get a certain number of points. The number of points depends on the kind of client you have installed (standard, SMP, GPU, etc.) and the kind of job you are running. The number of points you receive will be our metric for performance, as most people participating in Folding@home in teams (like ourselves) are interested in achieving the highest possible score.
Now let's show you the systems we built to collaborate with the project, their performance (i.e., the number of points they were giving us), and their electrical consumption. With this data you can get an idea of how much we were spending to keep them turned on 24/7. We will then investigate how to decrease consumption while keeping a high score.
But before we present the numbers, you need to understand more about power consumption. We measured consumption with a digital watt meter, which presents results in watts. With this instrument we were measuring the AC consumption of our system. This is not the same as what the components were pulling from the power supply, because the power supply itself consumes and wastes power. The ratio between the power the components pull from the power supply and the power the power supply pulls from the wall is called efficiency.
The higher the efficiency, the better, as you will be wasting less energy. For example, if a certain system is pulling 200 W from the wall, that means your whole computer is pulling 200 W (and you will pay the electricity company based on this amount), but the components connected to the power supply will be consuming less than that. If we take a typical power supply with 80% efficiency, the components would be pulling 160 W.
Suppose you replace your 80% efficiency power supply with another with 88% efficiency. Your system will still be pulling 160 W from the power supply, but your new unit will be pulling less from the wall: 182 W.
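The relationship between the load on the power supply and the wall-socket draw can be sketched in a few lines of Python (the `wall_draw` function name is our own, used only for illustration; the 160 W load and 80%/88% efficiencies are the figures from the example above):

```python
# Wall-socket (AC) draw for a given DC load at a given PSU efficiency.
def wall_draw(dc_load_w: float, efficiency: float) -> float:
    """AC power pulled from the wall = DC load / efficiency."""
    return dc_load_w / efficiency

print(round(wall_draw(160, 0.80)))  # 200 W at 80% efficiency
print(round(wall_draw(160, 0.88)))  # 182 W at 88% efficiency
```

Note that only the wall draw changes when you swap power supplies; the components still pull the same 160 W.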
So right off the bat, one way to save on the electricity bill is to replace your power supply with one with higher efficiency. One way to discover your unit's efficiency is to read the efficiency chart provided by the manufacturer. Another way is to read our reviews, where we measure this.
Watts (W) measure the amount of power the equipment is consuming, but the electricity company charges you based on how much energy you consume, which is measured in kWh (kilowatt-hours). Energy is the amount of power you consume over time, so one kWh represents 1 kW (1,000 watts) consumed over one hour. Since we are going to assume that we will be running each machine 24/7, we will multiply the number of watts by 0.72 (24 hours x 30 days / 1,000; the division by 1,000 is necessary to convert Wh into kWh) to get an estimate of the monthly consumption in kWh. Then we can simply multiply this number by the cost of each kWh to get an idea of the monthly cost of running each system 24/7. Of course, the cost of electricity varies depending on where you live; we are using the value of USD 0.1224800 per kWh, which is the electricity cost in our town on the day we published this article. On top of that we had other charges, like a franchise fee, green power financing, etc., that we are not considering for simplicity.
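The monthly-cost formula above can be written out as a short sketch (the function names are ours; `KWH_PRICE_USD` is our local rate, so substitute your own):

```python
# Monthly energy and cost for a system running 24/7:
# watts x 0.72 = monthly kWh (24 h x 30 days / 1,000).
KWH_PRICE_USD = 0.1224800  # our local rate on publication day

def monthly_kwh(watts: float) -> float:
    return watts * 24 * 30 / 1000  # same as watts * 0.72

def monthly_cost(watts: float) -> float:
    return monthly_kwh(watts) * KWH_PRICE_USD

print(monthly_kwh(202))             # 145.44 kWh
print(round(monthly_cost(202), 2))  # USD 17.81
```

The 202 W example matches one of the systems measured later in this article.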
As for performance, we measured how long each video card or CPU took to process 1% of the work unit. Multiplying this time by 100 gives how long each device takes to process the entire work unit. Dividing 86,400 (the number of seconds in a day) by this number gives the maximum number of work units the device can process per day. Since we know how many points each work unit is worth, we can find the maximum score we can expect from the device by multiplying the maximum number of work units it can process per day by the number of points each work unit gives us. The result is the maximum score the device can give you per day, and this is the number we will be using.
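The score calculation above can be sketched as follows (the function name is our own; the example inputs, 81 seconds per 1% on a 480-point project, correspond to the GeForce 8800 GT figures that appear in the tables later in this article):

```python
# Estimated maximum daily score:
# time per 1% -> time per WU -> WUs per day -> points per day.
SECONDS_PER_DAY = 86_400

def max_daily_points(seconds_per_percent: float, points_per_wu: float) -> float:
    seconds_per_wu = seconds_per_percent * 100
    wus_per_day = SECONDS_PER_DAY / seconds_per_wu
    return wus_per_day * points_per_wu

print(round(max_daily_points(81, 480)))  # 5120 points per day
```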
[nextpage title=”Our High-Performance Setup”]
Below you can see all the systems we built to run Folding@home. Please see the previous page for a detailed explanation of consumption, cost, and performance/score.
| System # | CPU | Video Card | Motherboard | Memory | HDD | Power Supply |
|---|---|---|---|---|---|---|
| 1 | Core 2 Extreme QX9650 (3 GHz) | None | MSI G31M3-F | 2 GB DDR2-800 Kingston KVR800D2N6/1GB | 160 GB | Antec EarthWatts 500 W |
| 2 | Core 2 Duo E6600 (2.4 GHz) | XFX GeForce GTX 260 640M XXX | Gigabyte GA-P35C-DS3R | 4 GB DDR2-800 OCZ | 200 GB | PC Power & Cooling Silencer 610 |
| 3 | Phenom 9600 | GeForce 9800 GT 1 GB | Sapphire PI-AM2RS780G | 2 GB DDR2-800 Kingston KVR800D2N6/1GB | 500 GB | Zalman |
| 4 | Phenom 9700 | Radeon HD 4870 + Radeon HD 4850 | ASUS M3A32-MVP | 2 GB DDR2-800 Kingston KVR800D2N6/1GB | 500 GB | OCZ ProXstream 1,000 W |
| 5 | Core 2 Extreme QX9770 (3.2 GHz) | GeForce GTX 280 + GeForce 8800 GT + GeForce 8800 GT | EVGA nForce 790i Ultra | 2 GB DDR3-2000 | 500 GB | OCZ EliteXstream 1,000 W |
Now we are going to do a detailed analysis of our systems.
[nextpage title=”Consumption Analysis”]
We measured consumption under three scenarios: first with the system running only the SMP client, then only the GPU client, and then both clients at the same time. We wanted to see whether it would make sense to run the SMP and GPU clients together.
| System # | Client | Consumption (W) | Monthly Consumption (kWh) | Monthly Cost (USD) * |
|---|---|---|---|---|
| 2 | GPU + SMP | 202 | 145.44 | $17.81 |
| 3 | GPU + SMP | 213 | 153.36 | $18.78 |
| 5 | GPU + SMP | 468 | 336.96 | $41.27 |
* USD 0.1224800 per kWh running 24/7.
Here we found something very curious. Each GPU client puts at least one of the CPU cores to work at 100%. On dual-core CPUs, one of the cores will always be at 100% load (independently of the number of GPUs running Folding@home). On quad-core CPUs, one CPU core per GPU will run at 100%. For example, on our Core 2 Duo, when we started the GPU client the CPU load went straight to 50%, with one core idle and the second core 100% used. Because of this behavior, when we ran the SMP and GPU clients at the same time, consumption (and performance) dropped, as the two clients were competing for the CPU. On our quad-core system with three video cards, CPU utilization was at 75%, with one CPU core used per GPU running Folding@home.
Notice that, except on systems one and six, all systems had a video card installed, so the consumption presented when running the SMP client alone includes the consumption of the installed video cards in idle mode.
See how our Phenom 9600 running the SMP client was consuming more than a GeForce 9800 GT running the GPU client.
Since we were running all systems with the GPU and SMP clients at the same time (except machine number 4), the estimated monthly cost of running these six machines was USD 138. Ouch. The estimated daily score was 30,715 points, for an estimated monthly score of 1,063,590.
On system 1 we were running Debian Linux 4.0 (64-bit), while on the other systems we were running Windows XP SP3 with Catalyst 8.10 drivers for ATI and 177.84 drivers for NVIDIA.
[nextpage title=”Performance Analysis”]
Now we want to see what we were getting in terms of score on Folding@home by running these systems, and do some preliminary analysis to find the most efficient configurations we had running. WU stands for work unit. Project is the number of the Folding@home project each client was running at the time we collected our data, which tells us how many points they give us for each delivered work unit (click here to see the full table). We put the number of points given for each completed WU in parentheses. The maximum daily performance is calculated by dividing 86,400 (the number of seconds in a day) by the time to complete one work unit, and multiplying the result by the points given for each completed work unit of that project.
Our metric for measuring efficiency will be points/kWh, which is calculated by dividing the maximum monthly performance by the monthly consumption in kWh. This index indicates how many points each system produces with each kWh consumed from the wall. So the higher this number, the better.
| System # | Client | Project (Points) | Time to Complete One WU (seconds) | Max. Daily Performance (Points) | Max. Monthly Performance (Points) | Points/kWh |
|---|---|---|---|---|---|---|
| 2 | GPU + SMP | Above | 150,000 (CPU), 8,700 (GPU) | 5,873 | 176,190 | 1,211 |
| 3 | GPU + SMP | Above | 84,100 (CPU), 15,400 (GPU) | 4,501 | 135,030 | 880 |
| 4 | GPU | 5651 (388) | 14,300 (HD 4850), 14,000 (HD 4870) | 2,344 + 2,394 = 4,738 | 142,140 | 459 |
| 5 | GPU | 5013 (480) | 8,200 (GTX 280), 8,100 (8800 GT), 8,500 (8800 GT) | 5,057 + 5,120 + 4,879 = 15,056 | 451,680 | 1,422 |
| 5 | GPU + SMP | Above | 79,500 (CPU), 8,200 (GTX 280), 8,100 (8800 GT), 8,500 (8800 GT) | 17,143 | 514,290 | 1,526 |
You should understand something very important about the Folding@home scoring system. While work units assigned to NVIDIA-based video cards will almost always give you 480 points, the number of points given by work units processed by ATI video cards and the PlayStation 3 console can change quite often. The results above are based on the project each client was running at the time we made our tests and do not reflect the best scores ATI and PS3 systems can achieve.
Our ATI-based video cards were processing a work unit that gave 388 points, but there are work units that give 548 points. Our PS3 was processing a work unit that gave 110 points, but there are work units that give 330 points. The time to complete these higher-paying units can be longer, however. Just as an exercise, we compiled the following table for systems four (ATI) and six (PS3) as if they were processing these other kinds of work units that give more points. We are doing this in order not to be accused of bias or have someone point out this potential flaw in our methodology later. For this exercise we will assume the clients process each kind of work unit with the same performance, which may not be true in the real world.
| System # | Client | Project (Points) | Time to Complete One WU (seconds) | Max. Daily Performance (Points) | Max. Monthly Performance (Points) | Points/kWh |
|---|---|---|---|---|---|---|
| 4 | GPU | 4743 (548) | 14,300 (HD 4850), 14,000 (HD 4870) | 3,311 + 3,382 = 6,693 | 200,790 | 648 |
As you can see, even simulating the best performance these systems could achieve, both performance and efficiency were at levels below our other systems.
From the above results we learned interesting things about our systems:
- Our PlayStation 3 achieved the lowest efficiency index (although in our simulation above it was more efficient than the systems running only the SMP client, that would hold only if it could always process work units that give 330 points, which isn't the case), meaning that we were spending too much energy to produce too few points compared to our other systems.
- System five was the most expensive to run, but it was also the most efficient, meaning it was the one that could produce the most points per kWh. On this system it was worthwhile to run the SMP client at the same time, as both our score and our points/kWh index increased.
- On systems two and three it wasn't worthwhile to run the SMP client at the same time as the GPU client: the points/kWh index dropped when we did that.
- System four, which had two high-end ATI video cards, achieved a very low points/kWh index. This was the first system we decided to shut down: it was wasting a lot of energy to produce too few results.
Now we were curious to see whether mid-range or even low-end video cards would achieve better performance/power ratios. To find out, we tested all the video cards we had available.
[nextpage title=”Which Video Card is The Best?”]
After we saw that there is a huge difference in power consumption among different systems, we decided to build a mainstream system and measure the performance and power consumption of every video card we could get our hands on installed in it. Keeping the whole system identical and changing only the video card is the correct way to evaluate video card performance.
The system we built had the following specs: Core 2 Duo E6600 (2.4 GHz), ASUS P5K-E/WIFI-AP Motherboard (Intel P35 chipset), 2 GB DDR2-800 (Kingston KVR800D2N6/1GB), 500 GB hard disk drive (Western Digital Caviar SE16), Zalman ZM-600HP power supply and Lite-On LH-20AIL optical drive. We were running Windows XP SP3 with Catalyst 8.10 drivers for ATI and 177.84 drivers for NVIDIA.
You can see the results below. Please keep in mind that consumption is the AC consumption of the whole system, not only of the video card. NVIDIA cards processed project 5800 (which gives 480 points per completed work unit), while ATI cards processed project 4743 (which gives 548 points per completed work unit).
We sorted the table below from the card with the best points/kWh index to the worst.
| Video Card | Time to Complete One WU (seconds) | Max. Daily Performance (Points) | Max. Monthly Performance (Points) | Consumption (W) | Monthly Consumption (kWh) | Monthly Cost (USD) * | Points/kWh |
|---|---|---|---|---|---|---|---|
| GeForce GTX 260 | 5,800 | 7,150 | 214,500 | 214 | 154.08 | $18.87 | 1,392 |
| GeForce GTX 280 | 5,500 | 7,540 | 226,200 | 234 | 168.48 | $20.64 | 1,342 |
| GeForce 8800 GT | 8,100 | 5,120 | 153,600 | 160 | 115.20 | $14.11 | 1,333 |
| GeForce 9800 GT 1 GB | 9,300 | 4,459 | 133,770 | 170 | 122.40 | $14.99 | 1,093 |
| GeForce 8800 GTS | 10,100 | 4,106 | 123,180 | 188 | 135.36 | $16.58 | 910 |
| Radeon HD 4830 | 17,800 | 2,660 | 79,800 | 158 | 113.76 | $13.93 | 701 |
| Radeon HD 4870 | 12,600 | 3,758 | 112,740 | 225 | 162.00 | $19.84 | 696 |
| Radeon HD 4850 | 15,100 | 3,136 | 94,080 | 189 | 136.08 | $16.67 | 691 |
| GeForce 9500 GT | 21,700 | 1,911 | 57,330 | 119 | 85.68 | $10.49 | 669 |
| GeForce 8600 GT | 25,300 | 1,639 | 49,170 | 126 | 90.72 | $11.11 | 542 |
| Radeon HD 3870 | 21,100 | 2,244 | 67,320 | 178 | 128.16 | $15.70 | 525 |
| GeForce 8500 GT | 58,900 | 704 | 21,120 | 108 | 77.76 | $9.52 | 272 |
| Radeon HD 3450 (64-bit) | 288,000 | 116 | 3,480 | 111 | 79.92 | $9.79 | 43 |
* USD 0.1224800 per kWh running 24/7.
The results were quite interesting and we will talk more about them in the Conclusions.
Now that we know a lot more about what is going on with our parts, we decided to replace parts in our systems. Let's see what we did and what happened.
[nextpage title=”Fine Tuning our Systems”]
After carefully reading the numbers presented in the previous page, we decided to make the following changes to our setup:
- System 1: Remained as before.
- System 2: Replaced the GeForce GTX 260 with a GeForce 8800 GT and uninstalled the SMP client. Power consumption dropped from 202 W (145.44 kWh or USD 17.81/month) to 185 W (133.20 kWh or USD 16.31/month). Maximum performance changed from 5,873 points/day or 176,190 points/month to 5,120 points/day or 153,600 points/month. Our points/kWh index for this machine is now 1,153 (down from 1,211).
- System 3: Decommissioned.
- System 4: This was the source of our high consumption, because it was using two ATI video cards (Radeon HD 4870 and Radeon HD 4850). We removed the ATI video cards and replaced them with one GeForce 8800 GT and one GeForce 9800 GT. We also replaced the Phenom 9700 CPU with an Athlon X2 4600+, which saved some watts, as we were no longer going to run the SMP client, and swapped the OCZ ProXstream 1,000 W power supply for a Zalman ZM-600HP unit. Here is where we saw a huge difference in consumption and performance. This system as it was before was consuming 430 W (309.60 kWh, USD 37.92) and producing a daily score of 4,738 points, or 142,140 points per month. After the changes, power consumption decreased 41% to 254 W (182.88 kWh, USD 22.40) and our score doubled, going to 9,485 points per day, or 284,550 per month! Our points/kWh index increased from 459 to 1,555.
- System 5: Even though this machine had the best efficiency index before, it was also the most expensive to run (468 W, 336.96 kWh, USD 41.27), as it had three video cards and a very high-end CPU. So we decided to shut it down, use the two GeForce 8800 GT cards in the other systems, and put our GeForce GTX 280 back in the closet.
- System 6: We decided to use our PlayStation 3 only to play games or watch Blu-ray movies. It was not efficient to keep it running Folding@home.
We now have only three systems running Folding@home, but consumption dropped to 560 W (403.20 kWh) from 1,565 W (1,127 kWh), which will cost us USD 49.38/month instead of USD 138/month. That is a 64% decrease.
Of course, performance decreased. We will now be making 17,439 points/day or 523,170 points/month, down from 30,715 points/day or 1,063,590 points/month, a 43% decrease.
The beauty, as you can see, is that performance and consumption didn't decrease in the same proportion. Before, our points/kWh index was 944 (i.e., we were making 944 points at Folding@home for every kWh consumed); it increased to 1,297. So our efficiency increased 38%!
We found out several interesting things in our investigation. Here is a summary:
- Of the video cards we analyzed, the GeForce 8800 GT is the one that provides the best overall cost/benefit for running Folding@home. Of course, you will get a higher score with a GeForce GTX 260 or GeForce GTX 280, but they are more expensive and also consume more power. If you think only about the points/kWh ratio (i.e., efficiency), then the GeForce GTX 260 is the best: it produces more points per kWh consumed than any other video card we tested.
- A "weaker" video card won't necessarily consume less power than a "stronger" one. Just see how the GeForce 8800 GTS produces a lower score and consumes more power than a GeForce 8800 GT.
- ATI video cards should not be used for running Folding@home: they have a far lower points/kWh ratio than NVIDIA cards. A GeForce 8800 GT provides almost double the efficiency of a Radeon HD 4870. If you are building dedicated systems for running Folding@home, stick with NVIDIA: you will get a higher score and a lower electricity bill.
- Very low-end video cards like the Radeon HD 3450 and GeForce 8500 GT are not efficient for running Folding@home and should be avoided. In the mainstream market, the GeForce 9500 GT was the one with the best performance and efficiency index (points/kWh), making it our recommendation in this segment.
- Running the SMP client together with the GPU client won't necessarily increase system performance. In our tests, performance decreased on two of the three systems. We found that for each video card running the GPU client, Folding@home will completely use one CPU core (on quad-core CPUs it uses one core per GPU, so with a quad-core CPU and two video cards, two CPU cores will be constantly used; on dual-core CPUs it uses one of the two cores all the time, independently of the number of GPUs installed). So when you run the SMP client at the same time, the two clients compete for the CPU, leading to lower performance. On the other hand, consumption also decreases.
- If you want to build a system to run only the SMP client, it is more efficient to use a motherboard with on-board video, because the base consumption will be lower, as you won't have a video card installed.
- The PlayStation 3 achieved one of the lowest points/kWh ratios, meaning you will feel an increase in your electricity bill without a meaningful increase in your Folding@home score. We see lots of people praising the math performance of the PS3, but that performance isn't converted into a huge Folding@home score, because each PS3 work unit doesn't give many points.
- If you are building a system to run Folding@home, we think points/kWh should be your metric for efficiency. Buying a digital watt meter will help you a lot in finding what you can change in your setup to get a more efficient system. In our experience, you should keep only systems with a points/kWh index of at least 1,000. Systems with indexes below that should be reevaluated.
- We tested only the parts we had available in our lab. If we didn't include part A or B, it was because we didn't have it. So please do not post comments like "why didn't you test part X?", which actually means "hey, can you please evaluate the PC I have at home for free?" If you are really worried about consumption, buy a digital watt meter. It will also help you find the consumption of other devices you have at home and see where you can save money.
In the text we explained how to find out the points/kWh index of a system, but here is a more practical summary:
- Measure the consumption with a digital watt meter connected to the system while your Folding@home client is running. The result will be given in watts. Multiply it by 0.72 to get the monthly consumption in kWh. This factor only works if you run your system 24/7; if you run it a different amount of time, the multiplier is the number of hours per month divided by 1,000 (e.g., for 40 hours a month the multiplier is 0.04).
- Locate on your electricity bill the cost of each kWh. Multiply it by the result found above to estimate how much it costs per month to keep that system running.
- Open the log file of your Folding@home client (FAHlog.txt), scroll down, and see how long it takes to process each 1%. Convert that time to seconds and multiply it by 100 to get the time, in seconds, your client takes to process one work unit.
- Scroll down the log file and locate the project number your client is running. Then locate this project at https://fah-web.stanford.edu/psummary.html to see how many points you get for processing a work unit from that project.
- Divide 86,400 by the time, in seconds, your client takes to process one work unit. The result is the number of work units your client can process per day. Multiply it by the number of points given for each work unit to estimate the number of points your system can generate per day.
- Multiply the result above by 30 to get the number of points you can generate per month. Divide this number by the system's monthly consumption in kWh. The final result is the points/kWh index. We recommend keeping only systems that deliver an index of at least 1,000.
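The steps above can be combined into a single helper. This is a minimal sketch: the function name is our own (not part of any Folding@home tool), and all inputs are read manually from your watt meter, your electricity bill, and FAHlog.txt.

```python
# points/kWh index, following the checklist above.
def points_per_kwh(watts: float,
                   seconds_per_percent: float,
                   points_per_wu: float,
                   hours_per_month: float = 720) -> float:
    kwh_month = watts * hours_per_month / 1000      # monthly consumption (kWh)
    seconds_per_wu = seconds_per_percent * 100      # time for one work unit
    wus_per_day = 86_400 / seconds_per_wu           # work units per day
    points_month = wus_per_day * points_per_wu * 30 # points per month
    return points_month / kwh_month

# Example: a 185 W system taking 81 s per 1% on a 480-point project
# (the figures for our revised System 2).
print(round(points_per_kwh(185, 81, 480)))  # 1153 - worth keeping (>= 1,000)
```

Change `hours_per_month` if the machine does not run 24/7; for 40 hours a month, pass `hours_per_month=40`.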