Graphics Card Power Consumption and Efficiency Tested (2024)


How much power do the best graphics cards use? It's an important question, and while the performance we show in our GPU benchmarks hierarchy is useful, one of the true measures of a GPU is how efficient it is. To determine GPU power efficiency, we need to know both performance and power use. Measuring performance is relatively easy, but measuring power can be complex. We're here to press the reset button on GPU power measurements and do things the right way.

There are various ways to determine power use, with varying levels of difficulty and accuracy. The easiest approach is via software like GPU-Z, which will tell you what the hardware reports. Alternatively, you can measure power at the outlet using something like a Kill-A-Watt power meter, but that only captures total system power, including PSU inefficiencies. The best and most accurate means of measuring the power use of a graphics card is to measure power draw between the power supply (PSU) and the card, but that requires a lot more work.

We've used GPU-Z in the past, but it had some clear inaccuracies. Depending on the GPU, it can be off by anywhere from a few watts to potentially 50W or more. Thankfully, the latest generation AMD Big Navi and Nvidia Ampere GPUs tend to report relatively accurate data, but we're doing things the right way. And by "right way," we mean measuring in-line power consumption using hardware devices. Specifically, we're using Powenetics software in combination with various monitors from TinkerForge. You can read our Powenetics project overview for additional details.
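At a high level, the in-line approach boils down to sampling voltage and current on each monitored rail and summing V × I. Here's a minimal sketch of such a logging loop; the voltage/current reader callables are hypothetical stand-ins for the TinkerForge bricklet APIs, not the actual Powenetics code.

```python
import time

def sample_rail(read_voltage, read_current):
    """Instantaneous power on one rail: P = V * I (watts)."""
    return read_voltage() * read_current()

def log_power(rails, n_samples, interval_s=0.1, sleep=time.sleep):
    """Log total card power at ~10 Hz by summing every monitored rail.
    rails is a list of (read_voltage, read_current) callable pairs."""
    samples = []
    for _ in range(n_samples):
        samples.append(sum(sample_rail(v, c) for v, c in rails))
        sleep(interval_s)
    return samples
```

With one pair of readers per rail (slot 12V, slot 3.3V, and each PEG connector), each logged sample is the card's total board power at that instant.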


After assembling the necessary bits and pieces — some soldering is required, and we have a list of the best soldering irons to help — the testing process is relatively straightforward. Plug in a graphics card and the power leads, boot the PC, and run some tests that put a load on the GPU while logging power use.

We've done that with all the legacy GPUs we have from the past six years or so, and we do the same for every new GPU launch. We've updated this article with the latest data from the GeForce RTX 3090, RTX 3080, RTX 3070, RTX 3060 Ti, and RTX 3060 12GB from Nvidia; and the Radeon RX 6900 XT, RX 6800 XT, RX 6800, and RX 6700 XT from AMD. We use the reference models whenever possible, which means only the EVGA RTX 3060 is a custom card.

If you want to see power use and other metrics for custom cards, all of our graphics card reviews include power testing. So for example, the RX 6800 XT roundup shows that many custom cards use about 40W more power than the reference designs, thanks to factory overclocks.

Test Setup

Tested GPUs

AMD GPUs:

Radeon RX 6900 XT 'reference'
Radeon RX 6800 XT 'reference'
Radeon RX 6800 'reference'
Radeon RX 6700 XT 'reference'
Radeon RX 5700 XT 'reference'
Radeon RX 5700 'reference'
Sapphire RX 5600 XT Pulse
Sapphire RX 5500 XT 8GB Pulse
Sapphire RX 5500 XT 4GB Pulse
AMD Radeon VII 'reference'
AMD Radeon RX Vega 64 'reference'
AMD Radeon RX Vega 56 'reference'
XFX RX 590 Fatboy
Sapphire RX 580 8GB Nitro+ LE
MSI RX 570 4GB Gaming X
MSI RX 560 4GB Aero
Radeon R9 Fury X
Sapphire R9 390 Nitro

Nvidia GPUs:

GeForce RTX 3090 FE
GeForce RTX 3080 FE
GeForce RTX 3070 FE
GeForce RTX 3060 Ti FE
EVGA GeForce RTX 3060 12GB
GeForce RTX 2080 Ti FE
GeForce RTX 2080 Super FE
GeForce RTX 2080 FE
GeForce RTX 2070 Super FE
GeForce RTX 2070 FE
GeForce RTX 2060 Super FE
GeForce RTX 2060 FE
EVGA GTX 1660 Ti XC
EVGA GTX 1660 Super
Zotac GTX 1660 Amp
Zotac GTX 1650 Super Twin
EVGA GTX 1650 GDDR6
Gigabyte GTX 1650 Gaming OC
GeForce GTX 1080 Ti FE
GeForce GTX 1080 FE
GeForce GTX 1070 Ti FE
GeForce GTX 1070 FE
GeForce GTX 1060 6GB FE
Zotac GTX 1060 3GB
MSI GTX 1050 Ti Gaming X
MSI GTX 1050 Gaming X
GeForce GTX 980 Ti
GeForce GTX 980
Zotac GeForce GTX 970
EVGA GeForce GTX 780

We're using our standard graphics card testbed for these power measurements, and it's what we'll use on graphics card reviews. It consists of an MSI MEG Z390 Ace motherboard, Intel Core i9-9900K CPU, NZXT Z73 cooler, 32GB Corsair DDR4-3200 RAM, a fast M.2 SSD, and various other bits and pieces. This is an open test bed, because the Powenetics equipment essentially requires one.

There's a PCIe x16 riser card (which is where the soldering came into play) that slots into the motherboard, and then the graphics cards slot into that. This is how we accurately capture actual PCIe slot power draw, from both the 12V and 3.3V rails. There are also 12V kits measuring power draw for each of the PCIe Graphics (PEG) power connectors — we cut the PEG power harnesses in half and run the cables through the power blocks. RIP, PSU cable.

Powenetics equipment in hand, we set about testing and retesting all of the current and previous generation GPUs we could get our hands on. You can see the full list of everything we've tested above.

From AMD, all of the latest generation Big Navi / RDNA2 GPUs use reference designs, as do the previous gen RX 5700 XT, RX 5700, Radeon VII, Vega 64, and Vega 56 cards. AMD doesn't do 'reference' models on most other GPUs, so we've used third party designs to fill in the blanks.

For Nvidia, all of the Ampere GPUs are Founders Edition models, except for the EVGA RTX 3060 card. With Turing, everything from the RTX 2060 and above is a Founders Edition card — which includes the 90 MHz overclock and slightly higher TDP on the non-Super models — while the other Turing cards are all AIB partner cards. Older GTX 10-series and GTX 900-series cards use reference designs as well, except where indicated.

Note that all of the cards are running 'factory stock,' meaning there's no manual overclocking or undervolting involved. Yes, the various cards might run better with some tuning and tweaking, but this is the way the cards will behave if you just pull them out of their box and install them in your PC. (RX Vega cards in particular benefit from tuning, in our experience.)

Our testing uses the Metro Exodus benchmark looped five times at 1440p ultra (except on cards with 4GB or less VRAM, where we loop 1080p ultra — that uses a bit more power). We also run Furmark for ten minutes. These are both demanding tests, and Furmark can push some GPUs beyond their normal limits, though the latest models from AMD and Nvidia both tend to cope with it just fine. We're only focusing on power draw for this article, as the temperature, fan speed, and GPU clock results continue to use GPU-Z to gather that data.


GPU Power Use While Gaming: Metro Exodus

Due to the number of cards being tested, we have multiple charts. The average power use charts show average power consumption during the approximately 10-minute test. These charts do not include the time in between test runs, where power use dips for about 9 seconds, so it's a realistic view of the sort of power use you'll see when playing a game for hours on end.
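As an illustration of that filtering, here's how one might average a power log while discarding the between-run dips; the 50%-of-median cutoff is our assumption for the sketch, not the article's exact rule.

```python
from statistics import mean, median

def average_load_power(samples, dip_fraction=0.5):
    """Average logged power, excluding the brief dips between benchmark
    loops. Samples below dip_fraction * median are treated as the idle
    gaps between runs and ignored (the cutoff is an assumption)."""
    cutoff = dip_fraction * median(samples)
    return mean(s for s in samples if s >= cutoff)
```

With a log that sits near 300W under load but dips to around 40W between runs, the dips fall below the cutoff and the average reflects only the loaded portions of the run.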

Besides the bar chart, we have separate line charts segregated into groups of up to 12 GPUs, and we've grouped cards from similar generations into each chart. These show real-time power draw over the course of the benchmark using data from Powenetics. The 12 GPUs per chart limit is to try and keep the charts mostly legible, and the division of what GPU goes on which chart is somewhat arbitrary.

Kicking things off with the latest generation GPUs, the overall power use is relatively similar. The 3090 and 3080 use the most power (for the reference models), followed by the three Navi 21 cards. The RTX 3070, RTX 3060 Ti, and RX 6700 XT are all pretty close, with the RTX 3060 dropping power use by around 35W. AMD does lead Nvidia in pure power use when looking at the RX 6800 XT and RX 6900 XT compared to the RTX 3080 and RTX 3090, but then Nvidia's GPUs are a bit faster, so it mostly equals out.

Step back one generation to the Turing GPUs and Navi 1x, and Nvidia had far more GPU models available than AMD. There were 15 Turing variants — six GTX 16-series and nine RTX 20-series — while AMD only had five RX 5000-series GPUs. Comparing similar performance levels, Nvidia Turing generally comes in ahead of AMD, despite using a 12nm process compared to 7nm. That's particularly true when looking at the GTX 1660 Super and below versus the RX 5500 XT cards, though the RTX models are closer to their AMD counterparts (while offering extra features).

It's pretty obvious how far AMD fell behind Nvidia prior to the Navi generation GPUs. The various Vega and Polaris AMD cards use significantly more power than their Nvidia counterparts. RX Vega 64 was particularly egregious, with the reference card using nearly 300W. If you're still running an older generation AMD card, this is one good reason to upgrade. The same is true of the legacy cards, though we're missing many models from these generations of GPU. Perhaps the less said, the better, so let's move on.


GPU Power with FurMark

FurMark, as we've frequently pointed out, is basically a worst-case scenario for power use. Some of the GPUs tend to be more aggressive about throttling with FurMark, while others go hog wild and dramatically exceed official TDPs. Few if any games can tax a GPU quite like FurMark, though things like cryptocurrency mining can come close with some algorithms (but not Ethereum's Ethash, which tends to be limited by memory bandwidth). The chart setup is the same as above, with average power use charts followed by detailed line charts.



The latest Ampere and RDNA2 GPUs are relatively evenly matched, with all of the cards using a bit more power in FurMark than in Metro Exodus. One thing we're not showing here is average GPU clocks, which tend to be far lower than in gaming scenarios — you can see that data, along with fan speeds and temperatures, in our graphics card reviews.

The Navi / RDNA1 and Turing GPUs start to separate a bit more, particularly in the budget and midrange segments. AMD didn't really have anything to compete against Nvidia's top GPUs, as the RX 5700 XT only matched the RTX 2070 Super at best. Note the gap in power use between the RTX 2060 and RX 5600 XT, though. In gaming, the two GPUs were pretty similar, but in FurMark the AMD chip uses nearly 30W more power. Actually, the 5600 XT used more power than the RX 5700, but that's probably because the Sapphire Pulse we used for testing has a modest factory overclock. The RX 5500 XT cards also draw more power than any of the GTX 16-series cards.

With the Pascal, Polaris, and Vega GPUs, AMD's GPUs fall toward the bottom. The Vega 64 and Radeon VII both use nearly 300W, and considering the Vega 64 competes with the GTX 1080 in performance, that's pretty awful. The RX 570 4GB (an MSI Gaming X model) actually exceeds the official power spec for an 8-pin PEG connector with FurMark, pulling nearly 180W. That's thankfully the only GPU to go above spec, for the PEG connector(s) or the PCIe slot, but it does illustrate just how bad things can get in a worst-case workload.
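That kind of excursion is easy to flag in logged data by comparing each rail's average draw against its spec ceiling. A minimal sketch; the limits below are the commonly cited PCIe CEM budgets (75W for the x16 slot, 75W for a 6-pin PEG connector, 150W for an 8-pin), and the rail names are our own labels rather than Powenetics identifiers.

```python
# Spec ceilings per rail (commonly cited PCIe CEM budgets; rail names
# are illustrative labels, not Powenetics identifiers).
SPEC_LIMITS_W = {
    "pcie_slot": 75.0,  # 12V + 3.3V combined through the x16 slot
    "peg_6pin": 75.0,
    "peg_8pin": 150.0,
}

def rails_over_spec(avg_rail_power, limits=SPEC_LIMITS_W):
    """Return the rails whose average draw exceeds the spec ceiling."""
    return {rail: watts for rail, watts in avg_rail_power.items()
            if watts > limits.get(rail, float("inf"))}

# An RX 570-like reading: the 8-pin connector pulling nearly 180W gets
# flagged, while the slot stays within budget.
flagged = rails_over_spec({"pcie_slot": 62.0, "peg_8pin": 178.0})
```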

The legacy charts are even worse for AMD. The R9 Fury X and R9 390 go well over 300W with FurMark, though perhaps that's more of an issue with the hardware not throttling to stay within spec. Anyway, it's great to see that AMD no longer trails Nvidia as badly as it did five or six years ago!

Analyzing GPU Power Use and Efficiency

It's worth noting that we're not showing or discussing GPU clocks, fan speeds or GPU temperatures in this article. Power, performance, temperature and fan speed are all interrelated, so a higher fan speed can drop temperatures and allow for higher performance and power consumption. Alternatively, a card can drop GPU clocks in order to reduce power consumption and temperature. We dig into this in our individual GPU and graphics card reviews, but we just wanted to focus on the power charts here. If you see discrepancies between previous and future GPU reviews, this is why.

The good news is that, using these testing procedures, we can properly measure the real graphics card power use and not be left to the whims of the various companies when it comes to power information. It's not that power is the most important metric when looking at graphics cards, but if other aspects like performance, features and price are the same, getting the card that uses less power is a good idea. Now bring on the new GPUs!

Here's the final high-level overview of our GPU power testing, showing relative efficiency in terms of performance per watt. The power data listed is a weighted geometric mean of the Metro Exodus and FurMark power consumption, while the FPS comes from our GPU benchmarks hierarchy and uses the geometric mean of nine games tested at six different settings and resolution combinations (so 54 results, summarized into a single fps score).
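For reference, a weighted geometric mean of the two power figures can be computed as below; the 2:1 Metro-to-FurMark weighting in the example is purely an assumption for illustration, since the article doesn't publish its exact weights.

```python
from math import prod

def weighted_geomean(values, weights):
    """Weighted geometric mean: prod(x_i ** (w_i / W)) with W = sum(w_i),
    equivalent to exp(sum(w_i * log(x_i)) / W)."""
    total = sum(weights)
    return prod(x ** (w / total) for x, w in zip(values, weights))

# Illustration only: blend hypothetical Metro Exodus and FurMark power
# averages, weighting the gaming result 2:1 (our assumed weighting).
blended = weighted_geomean([300.0, 330.0], [2, 1])
```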


Graphics Card | GPU FPS (9 Games) | GPU Power (Watts) | Efficiency Score
RX 6800 | 130.8 | 235.4 | 100.0%
RTX 3070 | 116.6 | 219.3 | 95.7%
RX 6700 XT | 112.0 | 215.5 | 93.5%
RTX 3060 Ti | 106.3 | 205.5 | 93.1%
RTX 3060 12GB | 83.6 | 171.8 | 87.6%
RX 6900 XT | 148.1 | 308.5 | 86.4%
RX 5700 | 78.4 | 165.8 | 85.1%
RX 6800 XT | 142.8 | 303.4 | 84.7%
GTX 1660 Super | 57.9 | 124.2 | 83.9%
GTX 1660 Ti | 57.8 | 124.0 | 83.8%
RTX 2080 Ti | 118.2 | 259.4 | 82.0%
RX 5600 XT | 71.1 | 158.3 | 80.8%
GTX 1650 GDDR6 | 36.4 | 83.3 | 78.7%
RTX 2080 | 95.5 | 219.0 | 78.5%
RTX 2060 Super | 77.2 | 177.3 | 78.4%
RTX 2060 | 68.5 | 159.2 | 77.5%
RTX 3080 | 142.1 | 333.0 | 76.8%
RTX 2070 Super | 91.0 | 213.3 | 76.8%
GTX 1650 Super | 43.5 | 102.3 | 76.4%
RTX 3090 | 152.7 | 361.0 | 76.1%
Titan RTX | 121.4 | 287.7 | 75.9%
Titan V | 104.9 | 249.4 | 75.6%
GTX 1660 | 50.1 | 119.2 | 75.6%
RTX 2080 Super | 102.1 | 246.8 | 74.4%
RTX 2070 | 81.0 | 196.1 | 74.4%
RX 5700 XT | 87.1 | 215.1 | 72.9%
GTX 1050 Ti | 24.5 | 61.0 | 72.3%
GTX 1070 | 56.1 | 141.7 | 71.3%
GTX 1650 | 31.9 | 82.5 | 69.6%
GTX 1080 | 69.1 | 180.4 | 68.9%
Titan Xp | 93.3 | 249.5 | 67.3%
RX 5500 XT 8GB | 48.6 | 133.7 | 65.4%
GTX 1070 Ti | 63.9 | 175.7 | 65.4%
GTX 1080 Ti | 88.2 | 246.8 | 64.3%
GTX 1060 6GB | 40.4 | 115.0 | 63.2%
GTX 1050 | 18.6 | 54.7 | 61.2%
Radeon VII | 89.9 | 266.7 | 60.7%
RX 5500 XT 4GB | 43.3 | 133.1 | 58.5%
GTX 1060 3GB | 34.0 | 108.6 | 56.4%
RX Vega 56 | 65.3 | 210.5 | 55.8%
RX 560 4GB | 19.1 | 65.1 | 52.9%
RX Vega 64 | 74.0 | 297.0 | 44.8%
RX 570 4GB | 38.5 | 163.1 | 42.5%
GTX 980 | 40.4 | 173.2 | 41.9%
GTX Titan X | 53.9 | 232.1 | 41.8%
GTX 980 Ti | 50.3 | 219.3 | 41.3%
RX 590 | 49.4 | 219.2 | 40.6%
GTX 970 | 33.8 | 150.4 | 40.4%
RX 580 | 47.2 | 214.2 | 39.6%
R9 Fury X | 50.0 | 261.1 | 34.4%
R9 390 | 41.5 | 263.6 | 28.3%

This table combines the performance data for all of the tested GPUs with the power use data discussed above, sorts by performance per watt, and then scales all of the scores relative to the most efficient GPU (currently the RX 6800). It's a telling look at how far behind AMD was, and how far it's come with the latest Big Navi architecture.
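That normalization is simple to reproduce. Here's a minimal sketch, using two rows from the table above as a sanity check: performance per watt, scaled so the best card is 100%.

```python
def efficiency_scores(cards):
    """cards maps name -> (fps, watts). Returns performance per watt
    scaled so the most efficient card is 100%, as in the table above."""
    ppw = {name: fps / watts for name, (fps, watts) in cards.items()}
    best = max(ppw.values())
    return {name: 100.0 * v / best for name, v in ppw.items()}

scores = efficiency_scores({"RX 6800": (130.8, 235.4),
                            "RTX 3070": (116.6, 219.3)})
# Reproduces the table: RX 6800 at 100.0%, RTX 3070 at roughly 95.7%.
```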

Efficiency isn't the only important metric for a GPU, and performance definitely matters. Also of note: the performance data does not include newer technologies like ray tracing and DLSS.

The most efficient GPUs are a mix of AMD's Big Navi GPUs and Nvidia's Ampere cards, along with some first generation Navi and Nvidia Turing chips. AMD claims the top spot with the Navi 21-based RX 6800, and Nvidia takes second place with the RTX 3070. Seven of the top ten spots are occupied by either RDNA2 or Ampere cards. However, Nvidia's GDDR6X-equipped GPUs, the RTX 3080 and 3090, rank 17th and 20th, respectively.

Given the current GPU shortages, finding a new graphics card in stock is difficult at best. By the time things settle down, we might even have RDNA3 and Hopper GPUs on the shelves. If you're still hanging on to an older generation GPU, upgrading might be problematic, but at some point it will be the smart move, considering the added performance and efficiency of more recent offerings.


Jarred Walton

Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.


Comments from the forums

  • King_V

    Thanks for this.

    I have to admit, and maybe it's just the cooling limitation, but I did not at all expect the Vega 56 and Vega 64 to measure pretty much exactly what their official TDP numbers of 210W and 295W are.

    and I may add a few earlier cards when I get some time

    Don't taunt me like this, LOL

    Reply

  • salgado18
    and I may add a few earlier cards when I get some time

    Geforce 8800GT? ;)

    Reply

  • JarredWaltonGPU

    King_V said:

    Thanks for this.

    I have to admit, and maybe it's just the cooling limitation, but I did not at all expect the Vega 56 and Vega 64 to measure pretty much exactly what their official TDP numbers of 210W and 295W are.

    Don't taunt me like this, LOL

    LOL. I don't have all of the older generation GPUs, and in fact I'm missing a whole lot of potentially interesting models. But I do have 980 Ti, 980, 970, Fury X, and R9 390 that might be fun to check.

    Yeah, what the heck -- I'm going to stuff in the 900 series while I work on some other writing. Thankfully, it's not too hard to just capture the data in the background while I work. Swap card, run Metro for 10 minutes, run FurMark for 10 minutes. Repeat. Check back to see new charts in... maybe by the end of the day. (I have I think a 770 and 780 as well sitting around. And R9 380 if I'm feeling ambitious.)

    On Vega, the one thing that's shocking to me is the gap between GPU-Z and Powenetics. The Vega 64 appears to use about 80W on "other stuff" besides the GPU, if GPU-Z's power numbers are accurately reporting GPU-only power. The Vega 56 isn't nearly as high -- about a 45W difference between GPU-Z and Powenetics. That suggests the 18% increase in HBM2 clocks causes about a 75% increase in HBM2 power use (though some of it is probably VRMs and such as well).

    Full figures:
    Vega 64 Powenetics: 296.7W avg power during Metro, GPU-Z says 219.2W.
    Vega 56 Powenetics: 209.4W avg power during Metro, GPU-Z says 164.6W.

    Reply

  • JarredWaltonGPU

    salgado18 said:

    Geforce 8800GT? ;)

    Why stop there? I really do sort of wish I had an FX 5900 Ultra floating around. :D

    Reply

  • bit_user

    Thanks for doing this!

    JarredWaltonGPU said:

    I do have 980 Ti, ... Fury X, and R9 390 that might be fun to check.

    These three are most interesting to me. I have a GTX 980 Ti and the Fury X was its AMD counterpart.

    What's intriguing about the R9 390 is that it was the last 512-bit card. Hawaii was hot, in general (275 W?).

    Reply

  • King_V

    JarredWaltonGPU said:

    LOL. I don't have all of the older generation GPUs, and in fact I'm missing a whole lot of potentially interesting models. But I do have 980 Ti, 980, 970, Fury X, and R9 390 that might be fun to check.

    Yeah, what the heck -- I'm going to stuff in the 900 series while I work on some other writing. Thankfully, it's not too hard to just capture the data in the background while I work. Swap card, run Metro for 10 minutes, run FurMark for 10 minutes. Repeat. Check back to see new charts in... maybe by the end of the day. (I have I think a 770 and 780 as well sitting around. And R9 380 if I'm feeling ambitious.)

    On Vega, the one thing that's shocking to me is the gap between GPU-Z and Powenetics. The Vega 64 appears to use about 80W on "other stuff" besides the GPU, if GPU-Z's power numbers are accurately reporting GPU-only power. The Vega 56 isn't nearly as high -- about a 45W difference between GPU-Z and Powenetics. That suggests the 18% increase in HBM2 clocks causes about a 75% increase in HBM2 power use (though some of it is probably VRMs and such as well).

    Full figures:
    Vega 64 Powenetics: 296.7W avg power during Metro, GPU-Z says 219.2W.
    Vega 56 Powenetics: 209.4W avg power during Metro, GPU-Z says 164.6W.

    I'd most certainly be curious as to the R9 380 results, but that's because, if I understand it correctly, it's basically a slightly overclocked R9 285. Since I had an R9 285 (Gigabyte's Windforce OC version), well, the curiosity is there, LOL

    And yep, with the Vegas, I was talking about the Powenetics numbers. I had assumed they'd blow past their official TDP numbers, but no, they're right in line, give or take 1-2 watts.

    Reply

  • JarredWaltonGPU

    King_V said:

    I'd most certainly be curious as to the R9 380 results, but that's because, if I understand it correctly, it's basically a slightly overclocked R9 285. Since I had an R9 285 (Gigabyte's Windforce OC version), well, the curiosity is there, LOL

    And yep, with the Vegas, I was talking about the Powenetics numbers. I had assumed they'd blow past their official TDP numbers, but no, they're right in line, give or take 1-2 watts.

    So, my old R9 380 4GB card is dead. Actually, it works, but one of the fans is busted and it crashed multiple times in both Metro Exodus and FurMark, so I'm not going to put any more effort into that one. RIP, 380...

    Anyway, I didn't show GPU clockspeeds, which is another dimension of the Vega numbers. The throttling in some of the tests is ... severe. This is why people undervolt, because otherwise the power use is just brutal on Vega. But undervolting can create instability and isn't a panacea.

    In Metro, the Vega 56 averages GPU clocks of 1230MHz -- not too bad, considering the official spec is 1156MHz base, 1471MHz boost. Vega 64 averages 1494MHz, again with base of 1247MHz and boost of 1546MHz. But FurMark... Vega 56 drops to 835MHz average clocks, and Vega 64 is at 1270MHz. That's on 'reference' models. The PowerColor V56 is higher power and clocks, obviously. MSI V64 is also better, possibly just because of binning and being made a couple of months after the initial Vega launch.

    Reply

  • salgado18

    JarredWaltonGPU said:

    Why stop there? I really do sort of wish I had an FX 5900 Ultra floating around. :D

    That would be awesome, right? If only you could get an AGP motherboard :ROFLMAO:

    Hey, throw some emails around, you might get some old cards borrowed. It would be very interesting to see the evolution in efficiency.

    Reply

  • gdmaclew

    There must be hundreds of thousands of RX580s out there but you chose NOT to test them?
    Perhaps you have other articles that do that?
    You have something against Polaris?

    Reply

  • King_V

    gdmaclew said:

    There must be hundreds of thousands of RX580s out there but you chose NOT to test them?
    Perhaps you have other articles that do that?
    You have something against Polaris?

    Oh, OBVIOUSLY he has something against Polaris, which is why he tested the RX 570 and RX 590. Clearly. :rolleyes:

    Reply

