How much power do the best graphics cards use? It's an important question, and while the performance we show in our GPU benchmarks hierarchy is useful, one of the true measures of a GPU is how efficient it is. To determine GPU power efficiency, we need to know both performance and power use. Measuring performance is relatively easy, but measuring power can be complex. We're here to press the reset button on GPU power measurements and do things the right way.
There are various ways to determine power use, with varying levels of difficulty and accuracy. The easiest approach is via software like GPU-Z, which will tell you what the hardware reports. Alternatively, you can measure power at the outlet using something like a Kill-A-Watt power meter, but that only captures total system power, including PSU inefficiencies. The best and most accurate means of measuring the power use of a graphics card is to measure power draw in between the power supply (PSU) and the card, but it requires a lot more work.
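For what it's worth, the software approach is easy to reproduce on your own: the driver exposes essentially the same board-power telemetry that GPU-Z reads. Here's a minimal sketch using Nvidia's NVML Python bindings (pynvml) to log reported power once per second. This is our own illustration rather than part of our test workflow, and it only captures what the card claims to draw:

```python
import time
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

samples = []
for _ in range(60):  # log for one minute at 1 Hz
    milliwatts = pynvml.nvmlDeviceGetPowerUsage(handle)  # driver-reported board power
    samples.append(milliwatts / 1000.0)
    time.sleep(1.0)

pynvml.nvmlShutdown()
print(f"Average reported power: {sum(samples) / len(samples):.1f} W")
```

The catch, as noted below, is that this number is only as accurate as the card's own sensors and firmware.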
We've used GPU-Z in the past, but it had some clear inaccuracies. Depending on the GPU, it can be off by anywhere from a few watts to potentially 50W or more. Thankfully, the latest generation AMD Big Navi and Nvidia Ampere GPUs tend to report relatively accurate data, but we're doing things the right way. And by "right way," we mean measuring in-line power consumption using hardware devices. Specifically, we're using Powenetics software in combination with various monitors from TinkerForge. You can read our Powenetics project overview for additional details.
Image: Tom's Hardware GPU Testbed
After assembling the necessary bits and pieces — some soldering is required, and we have a list of the best soldering irons to help — the testing process is relatively straightforward. Plug in a graphics card and the power leads, boot the PC, and run some tests that put a load on the GPU while logging power use.
We've done that with all the legacy GPUs we have from the past six years or so, and we do the same for every new GPU launch. We've updated this article with the latest data from the GeForce RTX 3090, RTX 3080, RTX 3070, RTX 3060 Ti, and RTX 3060 12GB from Nvidia; and the Radeon RX 6900 XT, RX 6800 XT, RX 6800, and RX 6700 XT from AMD. We use the reference models whenever possible, which means only the EVGA RTX 3060 is a custom card.
If you want to see power use and other metrics for custom cards, all of our graphics card reviews include power testing. So for example, the RX 6800 XT roundup shows that many custom cards use about 40W more power than the reference designs, thanks to factory overclocks.
Test Setup
Tested GPUs
AMD GPUs:
Radeon RX 6900 XT 'reference'
Radeon RX 6800 XT 'reference'
Radeon RX 6800 'reference'
Radeon RX 6700 XT 'reference'
Radeon RX 5700 XT 'reference'
Radeon RX 5700 'reference'
Sapphire RX 5600 XT Pulse
Sapphire RX 5500 XT 8GB Pulse
Sapphire RX 5500 XT 4GB Pulse
AMD Radeon VII 'reference'
AMD Radeon RX Vega 64 'reference'
AMD Radeon RX Vega 56 'reference'
XFX RX 590 Fatboy
Sapphire RX 580 8GB Nitro+ LE
MSI RX 570 4GB Gaming X
MSI RX 560 4GB Aero
Radeon R9 Fury X
Sapphire R9 390 Nitro
Nvidia GPUs:
GeForce RTX 3090 FE
GeForce RTX 3080 FE
GeForce RTX 3070 FE
GeForce RTX 3060 Ti FE
EVGA GeForce RTX 3060 12GB
GeForce RTX 2080 Ti FE
GeForce RTX 2080 Super FE
GeForce RTX 2080 FE
GeForce RTX 2070 Super FE
GeForce RTX 2070 FE
GeForce RTX 2060 Super FE
GeForce RTX 2060 FE
EVGA GTX 1660 Ti XC
EVGA GTX 1660 Super
Zotac GTX 1660 Amp
Zotac GTX 1650 Super Twin
EVGA GTX 1650 GDDR6
Gigabyte GTX 1650 Gaming OC
GeForce GTX 1080 Ti FE
GeForce GTX 1080 FE
GeForce GTX 1070 Ti FE
GeForce GTX 1070 FE
GeForce GTX 1060 6GB FE
Zotac GTX 1060 3GB
MSI GTX 1050 Ti Gaming X
MSI GTX 1050 Gaming X
GeForce GTX 980 Ti
GeForce GTX 980
Zotac GeForce GTX 970
EVGA GeForce GTX 780
We're using our standard graphics card testbed for these power measurements, and it's what we'll use for graphics card reviews. It consists of an MSI MEG Z390 Ace motherboard, Intel Core i9-9900K CPU, NZXT Z73 cooler, 32GB of Corsair DDR4-3200 RAM, a fast M.2 SSD, and the other various bits and pieces you see in the photo above. This is an open test bed, because the Powenetics equipment essentially requires one.
There's a PCIe x16 riser card (which is where the soldering came into play) that slots into the motherboard, and then the graphics cards slot into that. This is how we accurately capture actual PCIe slot power draw, from both the 12V and 3.3V rails. There are also 12V kits measuring power draw for each of the PCIe Graphics (PEG) power connectors — we cut the PEG power harnesses in half and run the cables through the power blocks. RIP, PSU cable.
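Put simply, total board power is just volts times amps on each monitored rail, summed across the slot's 12V and 3.3V feeds and every PEG connector. A rough sketch with made-up readings (the real Powenetics/TinkerForge channel names and values differ):

```python
# Hypothetical per-rail readings in volts and amps; actual channel names
# and values from the Powenetics/TinkerForge setup will differ.
rails = {
    "slot_12v": (12.0, 4.9),   # PCIe slot, 12V rail
    "slot_3v3": (3.3, 1.1),    # PCIe slot, 3.3V rail
    "peg1_12v": (12.0, 12.4),  # first 8-pin PEG connector
    "peg2_12v": (12.0, 11.8),  # second 8-pin PEG connector
}

# P = V x I per rail, summed for total board power
board_power = sum(volts * amps for volts, amps in rails.values())
print(f"Total board power: {board_power:.1f} W")  # ~352.8 W in this example
```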
Powenetics equipment in hand, we set about testing and retesting all of the current and previous generation GPUs we could get our hands on. You can see the full list of everything we've tested above.
From AMD, all of the latest generation Big Navi / RDNA2 GPUs use reference designs, as do the previous gen RX 5700 XT and RX 5700, Radeon VII, Vega 64, and Vega 56. AMD doesn't do 'reference' models on most other GPUs, so we've used third party designs to fill in the blanks.
For Nvidia, all of the Ampere GPUs are Founders Edition models, except for the EVGA RTX 3060 card. With Turing, everything from the RTX 2060 and above is a Founders Edition card — which includes the 90 MHz overclock and slightly higher TDP on the non-Super models — while the other Turing cards are all AIB partner cards. Older GTX 10-series and GTX 900-series cards use reference designs as well, except where indicated.
Note that all of the cards are running 'factory stock,' meaning there's no manual overclocking or undervolting involved. Yes, the various cards might run better with some tuning and tweaking, but this is the way the cards will behave if you just pull them out of their box and install them in your PC. (RX Vega cards in particular benefit from tuning, in our experience.)
Our testing uses the Metro Exodus benchmark looped five times at 1440p ultra (except on cards with 4GB or less VRAM, where we loop 1080p ultra — that uses a bit more power). We also run FurMark for ten minutes. These are both demanding tests, and FurMark can push some GPUs beyond their normal limits, though the latest models from AMD and Nvidia tend to cope with it just fine. We're only focusing on power draw in this article; we continue to use GPU-Z to gather temperature, fan speed, and GPU clock data.
GPU Power Use While Gaming: Metro Exodus
Due to the number of cards being tested, we have multiple charts. The average power use charts show average power consumption during the approximately 10-minute test. These charts do not include the time between test runs, where power use dips for about nine seconds, so they give a realistic view of the sort of power use you'll see when playing a game for hours on end.
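If you want to reproduce that kind of average from a raw power log, the idea is simply to drop the idle dips between runs before averaging. A rough sketch, assuming a headerless two-column CSV of (seconds, watts) with a hypothetical file name — not the actual Powenetics export format:

```python
import csv
import statistics

def average_load_power(csv_path, dip_fraction=0.5):
    """Average power over a log, ignoring the brief idle dips between runs.

    Assumes a headerless two-column CSV of (seconds, watts); samples below
    dip_fraction of the median reading are treated as between-run gaps.
    """
    with open(csv_path, newline="") as f:
        watts = [float(row[1]) for row in csv.reader(f) if row]
    cutoff = dip_fraction * statistics.median(watts)
    loaded = [w for w in watts if w >= cutoff]
    return sum(loaded) / len(loaded)

# Hypothetical log file name, purely for illustration
print(f"Average gaming power: {average_load_power('metro_rtx3080.csv'):.1f} W")
```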
Besides the bar charts, we have separate line charts grouped into sets of up to 12 GPUs from similar generations. These show real-time power draw over the course of the benchmark using data from Powenetics. The 12-GPU limit is there to keep the charts legible, and the division of which GPU goes on which chart is somewhat arbitrary.
Kicking things off with the latest generation GPUs, overall power use is relatively similar. The 3090 and 3080 use the most power (for the reference models), followed by the three Navi 21 cards. The RTX 3070, RTX 3060 Ti, and RX 6700 XT are all pretty close, with the RTX 3060 dropping power use by around 35W. AMD does lead Nvidia in pure power use when looking at the RX 6800 XT and RX 6900 XT compared to the RTX 3080 and RTX 3090, but Nvidia's GPUs are a bit faster, so it mostly equals out.
Step back one generation to the Turing GPUs and Navi 1x, and Nvidia had far more GPU models available than AMD. There were 15 Turing variants — six GTX 16-series and nine RTX 20-series — while AMD only had five RX 5000-series GPUs. Comparing similar performance levels, Nvidia Turing generally comes in ahead of AMD, despite using a 12nm process compared to 7nm. That's particularly true when looking at the GTX 1660 Super and below versus the RX 5500 XT cards, though the RTX models are closer to their AMD counterparts (while offering extra features).
It's pretty obvious how far AMD fell behind Nvidia prior to the Navi generation GPUs. The various Vega and Polaris AMD cards use significantly more power than their Nvidia counterparts. RX Vega 64 was particularly egregious, with the reference card using nearly 300W. If you're still running an older generation AMD card, this is one good reason to upgrade. The same is true of the legacy cards, though we're missing many models from these generations of GPU. Perhaps the less said, the better, so let's move on.
GPU Power with FurMark
FurMark, as we've frequently pointed out, is basically a worst-case scenario for power use. Some GPUs tend to be more aggressive about throttling with FurMark, while others go hog wild and dramatically exceed official TDPs. Few if any games can tax a GPU quite like FurMark, though things like cryptocurrency mining can come close with some algorithms (but not Ethereum's Ethash, which tends to be limited by memory bandwidth). The chart setup is the same as above, with average power use charts followed by detailed line charts.
The latest Ampere and RDNA2 GPUs are relatively evenly matched, with all of the cards using a bit more power in FurMark than in Metro Exodus. One thing we're not showing here is average GPU clocks, which tend to be far lower than in gaming scenarios — you can see that data, along with fan speeds and temperatures, in our graphics card reviews.
The Navi / RDNA1 and Turing GPUs start to separate a bit more, particularly in the budget and midrange segments. AMD didn't really have anything to compete against Nvidia's top GPUs, as the RX 5700 XT only matched the RTX 2070 Super at best. Note the gap in power use between the RTX 2060 and RX 5600 XT, though. In gaming, the two GPUs were pretty similar, but in FurMark the AMD chip uses nearly 30W more power. Actually, the 5600 XT used more power than the RX 5700, but that's probably because the Sapphire Pulse we used for testing has a modest factory overclock. The RX 5500 XT cards also draw more power than any of the GTX 16-series cards.
With the Pascal, Polaris, and Vega generations, AMD's GPUs fall toward the bottom. The Vega 64 and Radeon VII both use nearly 300W, and considering the Vega 64 competes with the GTX 1080 in performance, that's pretty awful. The RX 570 4GB (an MSI Gaming X model) actually exceeds the official power spec for an 8-pin PEG connector with FurMark, pulling nearly 180W. That's thankfully the only GPU to go above spec, for the PEG connector(s) or the PCIe slot, but it does illustrate just how bad things can get in a worst-case workload.
The legacy charts are even worse for AMD. The R9 Fury X and R9 390 go well over 300W with FurMark, though perhaps that's more of an issue with the hardware not throttling to stay within spec. Anyway, it's great to see that AMD no longer trails Nvidia as badly as it did five or six years ago!
Analyzing GPU Power Use and Efficiency
It's worth noting that we're not showing or discussing GPU clocks, fan speeds or GPU temperatures in this article. Power, performance, temperature and fan speed are all interrelated, so a higher fan speed can drop temperatures and allow for higher performance and power consumption. Alternatively, a card can drop GPU clocks in order to reduce power consumption and temperature. We dig into this in our individual GPU and graphics card reviews, but we just wanted to focus on the power charts here. If you see discrepancies between previous and future GPU reviews, this is why.
The good news is that, using these testing procedures, we can properly measure the real graphics card power use and not be left to the whims of the various companies when it comes to power information. It's not that power is the most important metric when looking at graphics cards, but if other aspects like performance, features and price are the same, getting the card that uses less power is a good idea. Now bring on the new GPUs!
Here's the final high-level overview of our GPU power testing, showing relative efficiency in terms of performance per watt. The power data listed is a weighted geometric mean of the Metro Exodus and FurMark power consumption, while the FPS comes from our GPU benchmarks hierarchy and uses the geometric mean of nine games tested at six different settings and resolution combinations (so 54 results, summarized into a single fps score).
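To make the math concrete, the efficiency column is simply FPS divided by watts, normalized to the best result. A quick sketch using a few rows from the table below (the exact Metro/FurMark weighting behind the power figure isn't published, so only the final scaling step is reproduced here):

```python
# FPS and weighted power values taken from the table below
cards = {
    "RX 6800": (130.8, 235.4),
    "RTX 3070": (116.6, 219.3),
    "RTX 3090": (152.7, 361.0),
}

# Performance per watt, then scale everything against the most efficient card
ppw = {name: fps / watts for name, (fps, watts) in cards.items()}
best = max(ppw.values())
for name in sorted(ppw, key=ppw.get, reverse=True):
    print(f"{name}: {100 * ppw[name] / best:.1f}%")  # RX 6800 -> 100.0%, RTX 3070 -> 95.7%
```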
Graphics Card | GPU FPS (9 Games) | GPU Power (Watts) | Efficiency Score |
---|---|---|---|
RX 6800 | 130.8 | 235.4 | 100.0% |
RTX 3070 | 116.6 | 219.3 | 95.7% |
RX 6700 XT | 112.0 | 215.5 | 93.5% |
RTX 3060 Ti | 106.3 | 205.5 | 93.1% |
RTX 3060 12GB | 83.6 | 171.8 | 87.6% |
RX 6900 XT | 148.1 | 308.5 | 86.4% |
RX 5700 | 78.4 | 165.8 | 85.1% |
RX 6800 XT | 142.8 | 303.4 | 84.7% |
GTX 1660 Super | 57.9 | 124.2 | 83.9% |
GTX 1660 Ti | 57.8 | 124.0 | 83.8% |
RTX 2080 Ti | 118.2 | 259.4 | 82.0% |
RX 5600 XT | 71.1 | 158.3 | 80.8% |
GTX 1650 GDDR6 | 36.4 | 83.3 | 78.7% |
RTX 2080 | 95.5 | 219.0 | 78.5% |
RTX 2060 Super | 77.2 | 177.3 | 78.4% |
RTX 2060 | 68.5 | 159.2 | 77.5% |
RTX 3080 | 142.1 | 333.0 | 76.8% |
RTX 2070 Super | 91.0 | 213.3 | 76.8% |
GTX 1650 Super | 43.5 | 102.3 | 76.4% |
RTX 3090 | 152.7 | 361.0 | 76.1% |
Titan RTX | 121.4 | 287.7 | 75.9% |
Titan V | 104.9 | 249.4 | 75.6% |
GTX 1660 | 50.1 | 119.2 | 75.6% |
RTX 2080 Super | 102.1 | 246.8 | 74.4% |
RTX 2070 | 81.0 | 196.1 | 74.4% |
RX 5700 XT | 87.1 | 215.1 | 72.9% |
GTX 1050 Ti | 24.5 | 61.0 | 72.3% |
GTX 1070 | 56.1 | 141.7 | 71.3% |
GTX 1650 | 31.9 | 82.5 | 69.6% |
GTX 1080 | 69.1 | 180.4 | 68.9% |
Titan Xp | 93.3 | 249.5 | 67.3% |
RX 5500 XT 8GB | 48.6 | 133.7 | 65.4% |
GTX 1070 Ti | 63.9 | 175.7 | 65.4% |
GTX 1080 Ti | 88.2 | 246.8 | 64.3% |
GTX 1060 6GB | 40.4 | 115.0 | 63.2% |
GTX 1050 | 18.6 | 54.7 | 61.2% |
Radeon VII | 89.9 | 266.7 | 60.7% |
RX 5500 XT 4GB | 43.3 | 133.1 | 58.5% |
GTX 1060 3GB | 34.0 | 108.6 | 56.4% |
RX Vega 56 | 65.3 | 210.5 | 55.8% |
RX 560 4GB | 19.1 | 65.1 | 52.9% |
RX Vega 64 | 74.0 | 297.0 | 44.8% |
RX 570 4GB | 38.5 | 163.1 | 42.5% |
GTX 980 | 40.4 | 173.2 | 41.9% |
GTX Titan X | 53.9 | 232.1 | 41.8% |
GTX 980 Ti | 50.3 | 219.3 | 41.3% |
RX 590 | 49.4 | 219.2 | 40.6% |
GTX 970 | 33.8 | 150.4 | 40.4% |
RX 580 | 47.2 | 214.2 | 39.6% |
R9 Fury X | 50.0 | 261.1 | 34.4% |
R9 390 | 41.5 | 263.6 | 28.3% |
This table combines the performance data for all of the tested GPUs with the power use data discussed above, sorts by performance per watt, and then scales all of the scores relative to the most efficient GPU (currently the RX 6800). It's a telling look at how far behind AMD was, and how far it's come with the latest Big Navi architecture.
Efficiency isn't the only important metric for a GPU, and performance definitely matters. Also of note: none of the performance data includes newer technologies like ray tracing and DLSS.
The most efficient GPUs are a mix of AMD's Big Navi GPUs and Nvidia's Ampere cards, along with some first generation Navi and Nvidia Turing chips. AMD claims the top spot with the Navi 21-based RX 6800, and Nvidia takes second place with the RTX 3070. Seven of the top ten spots are occupied by either RDNA2 or Ampere cards. However, Nvidia's GDDR6X-equipped GPUs, the RTX 3080 and 3090, rank 17th and 20th, respectively.
Given the current GPU shortages, finding a new graphics card in stock is difficult at best. By the time things settle down, we might even have RDNA3 and Hopper GPUs on the shelves. If you're still hanging on to an older generation GPU, upgrading might be problematic, but at some point it will be the smart move, considering the added performance and efficiency available from more recent offerings.
Jarred Walton
Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.
Comments from the forums
King_V Thanks for this.
I have to admit, and maybe it's just the cooling limitation, but I did not at all expect the Vega 56 and Vega 64 to measure pretty much exactly what their official TDP numbers of 210W and 295W are.
and I may add a few earlier cards when I get some time
Don't taunt me like this, LOL
salgado18 and I may add a few earlier cards when I get some time
Geforce 8800GT? ;)
JarredWaltonGPU King_V said:
Thanks for this. I have to admit, and maybe it's just the cooling limitation, but I did not at all expect the Vega 56 and Vega 64 to measure pretty much exactly what their official TDP numbers of 210W and 295W are.
Don't taunt me like this, LOL
LOL. I don't have all of the older generation GPUs, and in fact I'm missing a whole lot of potentially interesting models. But I do have 980 Ti, 980, 970, Fury X, and R9 390 that might be fun to check.
Yeah, what the heck -- I'm going to stuff in the 900 series while I work on some other writing. Thankfully, it's not too hard to just capture the data in the background while I work. Swap card, run Metro for 10 minutes, run FurMark for 10 minutes. Repeat. Check back to see new charts in... maybe by the end of the day. (I have I think a 770 and 780 as well sitting around. And R9 380 if I'm feeling ambitious.)
On Vega, the one thing that's shocking to me is the gap between GPU-Z and Powenetics. The Vega 64 appears to use about 80W on "other stuff" besides the GPU, if GPU-Z's power numbers are accurately reporting GPU-only power. The Vega 56 isn't nearly as high -- about a 45W difference between GPU-Z and Powenetics. That suggests the 18% increase in HBM2 clocks causes about a 75% increase in HBM2 power use (though some of it is probably VRMs and such as well).
Full figures:
Vega 64 Powenetics: 296.7W avg power during Metro, GPU-Z says 219.2W.
Vega 56 Powenetics: 209.4W avg power during Metro, GPU-Z says 164.6W.
JarredWaltonGPU salgado18 said:
Geforce 8800GT? ;)
Why stop there? I really do sort of wish I had an FX 5900 Ultra floating around. :D
bit_user Thanks for doing this!
JarredWaltonGPU said:
I do have 980 Ti, ... Fury X, and R9 390 that might be fun to check.
These three are most interesting to me. I have a GTX 980 Ti and the Fury X was its AMD counterpart.
What's intriguing about the R9 390 is that it was the last 512-bit card. Hawaii was hot, in general (275 W?).
King_V JarredWaltonGPU said:
LOL. I don't have all of the older generation GPUs, and in fact I'm missing a whole lot of potentially interesting models. But I do have 980 Ti, 980, 970, Fury X, and R9 390 that might be fun to check.
Yeah, what the heck -- I'm going to stuff in the 900 series while I work on some other writing. Thankfully, it's not too hard to just capture the data in the background while I work. Swap card, run Metro for 10 minutes, run FurMark for 10 minutes. Repeat. Check back to see new charts in... maybe by the end of the day. (I have I think a 770 and 780 as well sitting around. And R9 380 if I'm feeling ambitious.)
On Vega, the one thing that's shocking to me is the gap between GPU-Z and Powenetics. The Vega 64 appears to use about 80W on "other stuff" besides the GPU, if GPU-Z's power numbers are accurately reporting GPU-only power. The Vega 56 isn't nearly as high -- about a 45W difference between GPU-Z and Powenetics. That suggests the 18% increase in HBM2 clocks causes about a 75% increase in HBM2 power use (though some of it is probably VRMs and such as well).
Full figures:
Vega 64 Powenetics: 296.7W avg power during Metro, GPU-Z says 219.2W.
Vega 56 Powenetics: 209.4W avg power during Metro, GPU-Z says 164.6W.
I'd most certainly be curious as to the R9 380 results, but that's because, if I understand it correctly, it's basically a slightly overclocked R9 285. Since I had an R9 285 (Gigabyte's Windforce OC version), well, the curiosity is there, LOL
And yep, with the Vegas, I was talking about the Powenetics numbers. I had assumed they'd blow past their official TDP numbers, but no, they're right in line, give or take 1-2 watts.
JarredWaltonGPU King_V said:
I'd most certainly be curious as to the R9 380 results, but that's because, if I understand it correctly, it's basically a slightly overclocked R9 285. Since I had an R9 285 (Gigabyte's Windforce OC version), well, the curiosity is there, LOL
And yep, with the Vegas, I was talking about the Powenetics numbers. I had assumed they'd blow past their official TDP numbers, but no, they're right in line, give or take 1-2 watts.
So, my old R9 380 4GB card is dead. Actually, it works, but one of the fans is busted and it crashed multiple times in both Metro Exodus and FurMark, so I'm not going to put any more effort into that one. RIP, 380...
Anyway, I didn't show GPU clockspeeds, which is another dimension of the Vega numbers. The throttling in some of the tests is ... severe. This is why people undervolt, because otherwise the power use is just brutal on Vega. But undervolting can create instability and isn't a panacea.
In Metro, the Vega 56 averages GPU clocks of 1230MHz -- not too bad, considering the official spec is 1156MHz base, 1471MHz boost. Vega 64 averages 1494MHz, again with base of 1247MHz and boost of 1546MHz. But FurMark... Vega 56 drops to 835MHz average clocks, and Vega 64 is at 1270MHz. That's on 'reference' models. The PowerColor V56 is higher power and clocks, obviously. MSI V64 is also better, possibly just because of binning and being made a couple of months after the initial Vega launch.
salgado18 JarredWaltonGPU said:
Why stop there? I really do sort of wish I had an FX 5900 Ultra floating around. :D
That would be awesome, right? If only you could get an AGP motherboard :ROFLMAO:
Hey, throw some emails around, you might get some old cards borrowed. It would be very interesting to see the evolution in efficiency.
gdmaclew There must be hundreds of thousands of RX580s out there but you chose NOT to test them?
Perhaps you have other articles that do that?
You have something against Polaris?
King_V gdmaclew said:
There must be hundreds of thousands of RX580s out there but you chose NOT to test them?
Perhaps you have other articles that do that?
You have something against Polaris?
Oh, OBVIOUSLY he has something against Polaris, which is why he tested the RX 570 and RX 590. Clearly. :rolleyes: