If you’re a PC gamer, or a content creator who lives and dies by the speed of your graphics-accelerated software, your video card is the engine that powers what you can do—or how lustily you can brag.
Our guide will help you sort through the best video-card options for your desktop PC, what you need to know to upgrade a system, and how to evaluate whether a particular card is a good buy. We’ll also touch on some upcoming trends that could affect which card you choose. After all, consumer video cards range from under $100 to $1,500 and beyond. It’s easy to overpay or underbuy. (We won’t let you do that, though.)
Who’s Who in GPUs: AMD vs. Nvidia
First off, what does a graphics card do? And do you really need one?
If you’re looking at any given prebuilt desktop PC on the market, unless it’s a gaming-oriented machine, PC makers will de-emphasize the graphics card in favor of promoting CPU, RAM, or storage options. Indeed, sometimes that’s for good reason; a low-cost PC may not have a graphics card at all, relying instead on the graphics-accelerated silicon built into its CPU (an “integrated graphics processor,” commonly called an “IGP”). There’s nothing inherently wrong with relying on an IGP—most business laptops, inexpensive consumer laptops, and budget-minded desktops have them—but if you’re a gamer or a creator, the right graphics card is crucial.
A modern graphics solution, whether it’s a discrete video card or an IGP, handles the display of 2D and 3D content, drawing the desktop, and decoding and encoding video content in programs and games. All of the discrete video cards on the consumer market are built around large graphics processing chips designed by one of two companies: AMD or Nvidia. These processors are referred to as “GPUs,” for “graphics processing units,” a term that is also applied, confusingly, to the graphics card itself. (Nothing about graphics cards…ahem, GPUs…is simple!)
The two companies work up what are known as “reference designs” for their video cards, a standardized version of a card built around a given GPU. Sometimes these reference-design cards are sold directly by Nvidia (or, less often, by AMD).
Nvidia’s own brand of cards is easy to spot by the “Founders Edition” badge, which, until the release of Nvidia’s latest GeForce RTX 3000 series, didn’t signify much more than slightly higher-than-stock clock speeds and sturdy build quality. The Founders Edition cards are often the most aesthetically consistent of any cards that come out during the lifetime of a particular GPU. But their designs tend to be conservative, not as accommodating of aggressive overclocking or modification as some third-party options are.
Sometimes, reference cards are duplicated by third-party card makers (companies referred to in industry lingo as AMD or Nvidia “board partners”), such as Asus, EVGA, MSI, Gigabyte, Sapphire, XFX, and Zotac. Depending on the graphics chip in question, these board partners may sell their own self-branded versions of the reference card (adhering to the design and specifications set by AMD or Nvidia), or they may fashion their own custom products, with different cooling-fan designs, slight factory overclocks, or features such as LED mood illumination. Some board partners do both—that is, sell reference versions of a given GPU as well as their own, more radical designs.
Who Needs a Discrete GPU?
We mentioned integrated graphics (IGPs) above. IGPs are capable of meeting the needs of most general users today, with three broad exceptions…
Professional Workstation Users. These folks, who work with CAD software or in video and photo editing, will still benefit greatly from a discrete GPU. Some of their key applications can transcode video from one format to another, or perform other specialized operations using resources from the GPU instead of (or in addition to) those of the CPU. Whether this is faster will depend on the application in question, which specific GPU and CPU you own, and other factors.
Productivity-Minded Users With Multiple Displays. People who need a large number of displays can also benefit from a discrete GPU. Desktop operating systems can drive displays connected to the IGP and discrete GPUs simultaneously. If you’ve ever wanted five or six displays hooked up to a single system, you can combine an IGP and a discrete GPU to get there.
That said, you don’t necessarily need a high-end graphics card to do that. If you’re simply displaying business applications, multiple browser windows, or lots of static windows across multiple displays (i.e., not demanding PC games), all you need is a card that supports the display specifications, resolutions, monitor interfaces, and number of panels you need. If you’re showing four web browsers across four display panels, a GeForce RTX 3080 card, say, won’t confer any greater benefit than a GeForce GTX 1660 with the same supported outputs.
Gamers. And of course, there’s the gaming market, for whom the GPU is arguably the most important component. RAM and CPU choices matter, too, but if you had to pick between a top-end system circa 2018 fitted with a 2020 GPU and a top-end system today using the highest-end GPU you could buy in 2018, you’d want the former.
Graphics cards fall into two distinct classes: consumer cards meant for gaming and light content creation work, and dedicated cards meant for professional workstations and geared toward scientific computing, calculations, and artificial intelligence work. This guide, and our reviews, will focus on the former, but we’ll touch on workstation cards a little bit, later on. The key sub-brands you need to know across these two fields are Nvidia’s GeForce and AMD’s Radeon RX (on the consumer side of things), and Nvidia’s Titan and Quadro, as well as AMD’s Radeon Pro and Radeon Instinct (in the pro workstation field). Nvidia continues to dominate the very high end of both markets.
For now, though, we’ll focus on the consumer cards. Nvidia’s consumer card line in late 2020 is broken into two distinct classes, both united under the long-running GeForce brand: GeForce GTX and GeForce RTX. AMD’s consumer cards, meanwhile, comprise the Radeon RX and (now fading) Radeon RX Vega families, as well as the end-of-life Radeon VII. Before we get into the individual lines in detail, though, let’s outline a few very important considerations to weigh before you hit the checkout button on your cart.
Target Resolution and Monitor Tech: Your First Considerations
Resolution is the horizontal-by-vertical pixel count at which your video card will drive your monitor. This has a huge bearing on which card to buy, and how much you need to spend, when looking at a video card from a gaming perspective.
If you are a PC gamer, a big part of what you’ll want to consider is the resolution(s) at which a given video card is best suited for gaming. Nowadays, even low-end cards will display everyday programs at lofty resolutions like 3,840 by 2,160 pixels (a.k.a., 4K). But for strenuous PC games, those cards will not have nearly the power to drive smooth frame rates at high resolutions like those. In games, the video card is what calculates positions, geometry, and lighting, and renders the onscreen image in real time. For that, the higher the in-game detail level and monitor resolution you’re running, the more graphics-card muscle is required.
Resolution Is a Key Decision Point
The three most common resolutions at which today’s gamers play are 1080p (1,920 by 1,080 pixels), 1440p (2,560 by 1,440 pixels), and 2160p or 4K (3,840 by 2,160 pixels). Generally speaking, you’ll want to choose a card suited for your monitor’s native resolution. (The “native” resolution is the highest supported by the panel, and the one at which the display looks the best.)
You’ll also see ultra-wide-screen monitors with in-between resolutions (3,440 by 1,440 pixels is a common one); you can gauge these versus 1080p, 1440p, and 2160p by calculating the raw number of pixels for each (multiply the vertical number by the horizontal one) and seeing where that screen resolution fits in relative to the common ones. (See our targeted roundups of the best graphics cards for 1080p play and the best graphics cards for 4K gaming.)
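To make that comparison concrete, here is a minimal sketch of the pixel math, using the common resolutions discussed here:

```python
# Total pixel count is a rough gauge of how hard a GPU must work per frame.
resolutions = {
    "1080p (1,920 x 1,080)": 1920 * 1080,
    "1440p (2,560 x 1,440)": 2560 * 1440,
    "ultra-wide 1440p (3,440 x 1,440)": 3440 * 1440,
    "4K (3,840 x 2,160)": 3840 * 2160,
}

# Sort from least to most demanding by raw pixel count.
for name, pixels in sorted(resolutions.items(), key=lambda kv: kv[1]):
    print(f"{name}: {pixels:,} pixels")
```

Run this way, the 3,440-by-1,440 ultra-wide lands between standard 1440p and 4K, which is a reasonable first guess at how demanding it will be to drive.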
Why does this matter? Well, in the case of PC gaming, the power of the components inside your next PC—whether you are buying one, building one, or upgrading—should be distributed in a way that best suits the way you want to play.
Without getting too deep into the weeds, here’s how it works: The frame rates you’ll see when gaming at 1080p, even at the highest detail levels, are almost always down to some balance of CPU and GPU power, rather than either one being the outright determinant of peak frame rates.
Next is the 1440p resolution, which starts to split the load when you are playing at higher detail levels. Some games start to ask more of the GPU, while others can still lean on the CPU for the heavy math. (It depends on how the game has been optimized by the developer.) Then there’s 4K resolution, where, in most cases, almost all of the lifting is done exclusively by the GPU.
Consider how things shook out in our recent testing of the first card in the GeForce RTX 30 series, the GeForce RTX 3080…
While the GeForce RTX 3080 posted a result of 93 frames per second (fps) in Far Cry 5 at 4K resolution (just over a 50 percent improvement over last generation’s GeForce RTX 2080 Super at 61fps), its performance at 1080p resolution was just 11 percent improved. This is because at the extremes of 1080p gaming, the CPU can be more of a factor than it is at 4K.
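The percentage figures in that comparison are simple arithmetic; a quick sketch using the frame rates quoted above:

```python
def pct_uplift(new_fps: float, old_fps: float) -> float:
    """Percentage frame-rate improvement of a newer card over an older one."""
    return (new_fps - old_fps) / old_fps * 100

# Far Cry 5 at 4K: GeForce RTX 3080 (93fps) vs. RTX 2080 Super (61fps).
print(f"{pct_uplift(93, 61):.0f}% uplift at 4K")  # just over 50 percent
```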
Now, of course, you can always dial down the detail levels for a game to make it run acceptably at a higher-than-recommended resolution, or dial back the resolution itself. But to an extent, that defeats the purpose of a graphics card purchase. The highest-end cards are meant for 4K play or for playing at very high refresh rates at 1080p or 1440p; you don’t have to spend $1,000 or even $500 to play more than acceptably at 1080p.
In short: Always buy the GPU that fits the monitor you either play on today or plan to own in the near future. There are plenty of midrange GPUs that can power 1440p displays at their peak, and 4K is still, even in the second half of 2020, a fringe display resolution for the most active PC gamers if the Steam Hardware Survey is any indication. (It saw less than 3 percent of users playing at 4K in late 2020.)
High-Refresh Gaming: Why High-End GPUs Matter
Another thing to keep abreast of is a trend in gaming that’s gained major momentum in recent years: high-refresh gaming monitors. For ages, 60Hz (or 60 screen redraws a second) was the panel-refresh ceiling for most PC monitors, but that was before the genre of esports really hit its stride.
Panels focused on esports and high-refresh gaming may support up to 144Hz, 240Hz, or even 360Hz for smoother gameplay. What this means: If you have a video card that can consistently push frames in a given game in excess of 60fps, on a high-refresh monitor you may be able to see those formerly “wasted” frames in the form of smoother game motion.
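One way to see what those refresh rates ask of a graphics card is to convert each into a per-frame time budget; this little sketch does just that:

```python
# A panel refreshing at R hertz draws a new frame every 1000/R milliseconds.
# To feed it a unique frame on every refresh, the GPU must render within that budget.
for hz in (60, 144, 240, 360):
    budget_ms = 1000 / hz
    print(f"{hz}Hz -> about {budget_ms:.2f} ms to render each frame")
```

At 360Hz the card has under 3 milliseconds per frame, which is why high-refresh play rewards GPU power even at 1080p.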
Powered by esports success stories (like 16-year-old Fortnite prodigy Bugha turning into a multi-millionaire overnight), the demand has surged in recent years for high-refresh monitors that can keep esports hopefuls playing at their peak. And while 1080p is still overwhelmingly the preferred resolution for competitive players across all game genres, many are following the trends that monitors are setting first.
The number of players moving up to the 1440p bracket of graphical resolutions (played in either 16:9 aspect ratio at 2,560 by 1,440 pixels, or in 21:9 at 3,440 by 1,440) is growing faster than ever, thanks in no small part to recent game-monitor entries like the ViewSonic XG270QG, which finally marries the worlds of high-refresh and high-quality panels. To an extent, the cards and the panels are playing a game of leapfrog themselves.
Gaming at a higher resolution does have its benefits for those who want to hit their opponents with pixel-perfect precision, but just as many esports hopefuls and currently salaried pros still swear by playing at resolutions as low as 720p in games like Counter-Strike: Global Offensive. So all told, your mileage may vary, depending on the way you prefer to play.
Most casual gamers won’t care about extreme refresh rates, but the difference is marked if you play fast-action titles, and competitive esports hounds find the fluidity of a high refresh rate a competitive advantage. (See our picks for the best gaming monitors, including high-refresh models.) In short: Buying a powerful video card that pushes high frame rates can be a boon nowadays even for play at a pedestrian resolution like 1080p, if paired with a high-refresh monitor.
Finally, keep HDR compatibility in mind. More and more monitors these days—including almost every one of our Editors’ Choice picks for best gaming monitor this year—support HDR at some level. And while in our testing HDR10 and DisplayHDR 400 monitors often don’t make much of an impact with their HDR image quality, any monitor above the DisplayHDR 600 spec should factor into your GPU decision, as both a display for gaming and one for HDR-enhanced content.
Monitor buyers should also make sure the model they choose supports HDR transfer at a refresh rate and bitrate that a new card can support. It’s a dance, but one that can pay off beautifully on content creation and gaming monitors all the same.
FreeSync vs. G-Sync: Jets! Sharks! Maria?
Should you buy a card based on whether it supports one of these two venerable specs for smoothing gameplay? It depends on the monitor you have.
FreeSync (AMD’s solution) and G-Sync (Nvidia’s) are two sides of the same coin, a technology called adaptive sync. With adaptive sync, the monitor displays at a variable refresh rate led by the video card; the screen draws at a rate that scales up and down according to the card’s output capabilities at any given time in a game. Without it, wobbles in the frame rate can lead to artifacts, staggering/stuttering of the onscreen action, or screen tearing, in which mismatched screen halves display momentarily. Under adaptive sync, the monitor draws a full frame only when the video card can deliver a whole frame.
The monitor you own may support FreeSync or G-Sync, or neither. FreeSync is much more common, as it doesn’t add to a monitor’s manufacturing cost; G-Sync requires dedicated hardware inside the display. You may wish to opt for one GPU maker’s wares or the other’s based on this, but know that the tides are changing on this front. At CES 2019, Nvidia announced a driver tweak that will allow FreeSync-compatible monitors to use adaptive sync with late-model Nvidia GeForce cards, and a rapidly growing subset of FreeSync monitors has been certified by Nvidia as “G-Sync Compatible.” So the choice may not be as black and white (or as red or green) as it has been for years.
We’ve tested both, and unless you’re competing in a CS:GO or Overwatch pro-am circuit, you might be hard-pressed to see any consistent difference between the two in 2020 models. Screen tearing was a more difficult problem to solve back when G-Sync was first introduced, and these days both FreeSync and G-Sync-Compatible monitors work well enough that only expert eyes can tell the difference.
Meet the Radeon and GeForce Families
Now that we’ve discussed the ways these two rival gangs have come together in recent years, let’s talk about what makes them different. The GPU lines of the two big graphics-chip makers are constantly evolving, with low-end models suited to low-resolution gameplay ranging up to elite-priced models for gaming at 4K and/or very high refresh rates. Let’s look at Nvidia’s first.
A Look at Nvidia’s Lineup
The main part of the company’s current line is split between cards using last-generation (a.k.a. “20-series”) GeForce RTX GPUs based on the “Turing” architecture, and the newer GeForce GTX 1600-series cards, also built on Turing. The very newest introductions to the stack, the GeForce RTX 30-Series cards of fall 2020, are based on GPUs using an architecture called “Ampere.”
Here’s a quick rundown of the currently relevant card classes in the “Pascal” (Turing’s predecessor), Turing, and Ampere families, their rough pricing, and their use cases…
If you’re a longtime observer of the market, you’ll notice that many of the aging GeForce GTX Pascal cards, like the GTX 1070 and GTX 1080, are not listed above. They have long since sold through, and in 2020 are found mostly on the second-hand market, supplanted by their GeForce RTX successors. The GeForce GTX 1060 met a similar fate with the release of the GeForce GTX 1660 and 1660 Ti. The lowest-end Pascals (the GeForce GT 1030, GTX 1050, and GTX 1050 Ti) keep hanging on, though.
But first, let’s talk Turing. When Nvidia launched its line of 20 Series GPUs in September of 2018, the reaction was mixed. On the one hand, the company was offering up some of the most powerful GPUs seen to date, complete with new and exciting technologies like ray-tracing and DLSS.
But on the other, at the time of the Turing launch, no games supported ray-tracing or DLSS. Even two years later, the library of titles that supports DLSS 2.0 on its own or combined with ray-tracing is limited, totaling just seven in all.
At the same time, Nvidia also moved the goalposts for high-end GPU pricing, compared with past generations. The GeForce RTX 2080 Ti, the company’s new flagship graphics card, would hit shelves in excess of $1,000, and the next card down, the $699 GeForce RTX 2080, wasn’t much better a value.
The company course-corrected in 2019, releasing the GeForce RTX 2060 Super, RTX 2070 Super, and RTX 2080 Super (upticked versions of the existing cards) at the same time that AMD was launching its AMD Radeon RX 5700 and RX 5700 XT midrange GPUs. Covering both the RTX and GTX segments, Nvidia’s Super cards boost the specs of each card they’re meant to replace in the stack (some more effectively than others).
This all brings us to September 2020, and the launch of the GeForce RTX 30 Series. Nvidia unveiled new GeForce RTX 3070, GeForce RTX 3080, and GeForce RTX 3090 GPUs, which are a big enough deal to merit their own spec breakout…
The cards, built on Samsung’s 8nm process, are a generational leap, moving the RT cores to their second generation, the Tensor cores to their third, and the memory type from GDDR6 to GDDR6X. Reworked PCBs brought a host of innovations, from the placement of various modules and chips on the board to the inner workings of a brand-new heatsink.
As far as how the 30 Series has affected costs up and down the Nvidia card stack, we’d still classify the GeForce GT 1030 to GTX 1050 as low-end cards, at under $100 or a little above. The GTX 1650/GTX 1650 Super to GTX 1660 Ti make up Nvidia’s current midrange, spanning about $150 to $300 or a little higher. The high end got a whole lot more complicated with the release of the GeForce RTX 30 Series. We’d put the GeForce RTX 3080 and RTX 3090 in an “elite” high-end category separate from the RTX 2060, RTX 2070, RTX 2080, and coming RTX 3070.
The 30 Series announcement squashed the hopes of any RTX 20 Series owners counting on good resale value for their old GPUs during the upgrade cycle. GeForce RTX 2080 Ti cards, rarely less than $900 on eBay a few weeks before the Ampere launch, now go for as low as $500 used. Most GeForce RTX 20 Series cards just saw their price-to-performance ratios plummet, and things could get especially grim for RTX 20 Series sellers if Nvidia delivers on its promise that the upcoming $499 GeForce RTX 3070 will outperform the RTX 2080 Ti in certain gaming benchmarks.
A Look at AMD’s Lineup
As for AMD’s card classes, as 2020 draws to a close the company is stronger than it has been for some time, competing ably enough with Nvidia’s low-end and mainstream cards. It’s weaker at the high end, though, and it puts up almost no resistance to the elite class…
The aging Radeon RX 550 and RX 560 comprise the low end, while the Radeon RX 570 to RX 590 are the midrange and ideal for 1080p gaming, though their time is limited, given the company’s newest additions to its 1080p-play arsenal, the Radeon RX 5500 XT and the Radeon RX 5600 XT. The Radeon RX 580, RX Vega 56, and RX Vega 64 cards, the first a great-value 1080p card and the latter two good for both 1080p and 1440p play, are also on their way out.
Indeed, the 1080p and especially the 1440p AMD cards have seen a shakeup. AMD released the first of its long-awaited line of 7nm “Navi” midrange graphics cards in July 2019, after revealing them at its E3 Tech Day event, based on a whole new architecture it calls Radeon DNA (RDNA). The first three cards are the Radeon RX 5700, the Radeon RX 5700 XT, and the limited-run Radeon RX 5700 XT Anniversary Edition. All of these cards have their sights set on the 1440p gaming market, and each powers demanding AAA titles at above 60fps in that resolution bracket.
The Radeon VII is AMD’s sole player in the elite bracket; it trades blows with the GeForce RTX 2080 at 4K but generally performs less well at lower resolutions in games. It’s intended more for content creators needing GPU acceleration. It’s also end-of-life.
Why the Radeon VII is going away is no mystery: AMD’s rumored “Big Navi” high-end cards are imminent. (AMD has an event planned for the end of October.) But because everything about these cards remains rumor, we won’t say more on the topic until AMD confirms the details.
Pricing: How Much Should You Spend?
It’s been a turbulent couple of years in the arena of GPU pricing, with cryptocurrency miners inflating card prices sky-high in 2018, companies sniping each other’s launches, and MSRPs changing at the eleventh hour. Plus, AMD and Nvidia have packed the midrange-to-low-end market with so many options that, going down the line these days, spending $20 more or less on a new GPU in either direction could give you three distinct tiers of card model to choose from.
These days, AMD and Nvidia both target light 1080p gaming in the $80-to-$180 price range, higher-end 1080p and entry-level 1440p with cards between $200 and $300, and light-to-high-detail 1440p gaming between $300 and $400.
If you want a card that can handle 4K ably, you’ll need to spend more than $400…at the very least. A GPU that can push 4K gaming at high detail levels will cost $499 to $1,500. Cards in the $150-to-$350 market generally offer performance in line with their cost: if one card is a certain amount costlier than another, the increase in performance is usually roughly proportional to the increase in price. In the high-end and elite-level card stacks, though, this rule falls away; spending more money yields diminishing returns.
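You can see the diminishing-returns effect with a back-of-the-envelope frames-per-dollar calculation. The tiers, prices, and frame rates below are hypothetical placeholders for illustration, not test results:

```python
# Hypothetical (price in dollars, average fps in a demanding game) per tier.
cards = {
    "midrange card": (250, 60),
    "high-end card": (500, 100),
    "elite card": (1500, 140),
}

for name, (price, fps) in cards.items():
    print(f"{name}: {fps / price:.3f} fps per dollar")
```

In a spread like this, the elite tier delivers the highest frame rate but by far the fewest frames per dollar; the midrange card’s ratio here is more than double the elite card’s.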
Graphics Card Basics: Understanding the Core Specs
Now, our comparison charts above should give you a good idea of which card families you should be looking at, based on your monitor and your target resolution (and your budget). A few key numbers are worth keeping in mind when comparing cards, though: the graphics processor’s clock speed, the onboard VRAM (that is, how much video memory it has), and—of course!—the pricing.
When comparing GPUs from the same family, a higher base clock speed (that is, the speed at which the graphics core works) and more cores signify a faster GPU. Again, though: That’s only a valid comparison between cards in the same product family based on the same GPU. For example, the base clock on the GeForce RTX 2080 is 1,733MHz, while the base clock is 1,759MHz on a (factory overclocked) Republic of Gamers Strix version of the GTX 1080 (a different chip) from Asus in its out-of-the-box Gaming Mode. Comparing those two numbers tells you little; looking at two RTX 2080s versus one another in terms of clock speed would be valid, though.
Note that this base clock measure is distinct from the graphics chip’s boost clock. The boost clock is the speed to which the graphics chip can accelerate temporarily when under load, as thermal conditions allow. This can also vary from card to card in the same family. It depends on the robustness of the cooling hardware on the card and the aggressiveness of the manufacturer in its factory settings. The top-end partner cards with giant multi-fan coolers will tend to have the highest boost clocks for a given GPU.
This is to say nothing of AMD’s new category of GPU speed: “game clock.” According to the company, game clock represents the “average clock speed gamers should expect to see across a wide range of titles,” a number that the company’s engineers gathered by testing 25 different titles on the new Navi lineup of cards. We mention this so that you don’t compare game clocks with boost or base clocks; they are not the same measure.
Understanding Onboard Video-Card Memory
The amount of onboard video memory (sometimes referred to by the rusty term “frame buffer”) is usually matched to the requirements of the games or programs that the card is designed to run. In a certain sense, from a PC-gaming perspective, you can count on a video card to have enough memory to handle current demanding games at the resolutions and detail levels that the card is suited for. In other words, a card maker generally won’t overprovision a card with more memory than it can realistically use; that would inflate the pricing and make the card less competitive. But there are some wrinkles to this.
A card designed for gameplay at 1,920 by 1,080 pixels (1080p) these days will generally be outfitted with 4GB or 6GB of RAM, while cards geared more toward play at 2,560 by 1,440 pixels (1440p) or 3,840 by 2,160 (2160p, or 4K) tend to deploy 8GB or more. Usually, for cards based on a given GPU, all of the cards have a standard amount of memory.
The wrinkles: In some isolated but important cases, card makers offer versions of a card with the same GPU but different amounts of VRAM. Some key ones to know nowadays: cards based on the Radeon RX 5500 XT and RX 580 (4GB versus 8GB). Both are GPUs you’ll find in popular midrange cards a bit above or below $200, so mind the memory amount on these. The cheaper versions will have less.
Now, if you’re looking to spend $150 or more on a video card, with the idea of all-out 1080p gameplay, a card with at least 4GB of memory really shouldn’t be negotiable. Both AMD and Nvidia now outfit their $200-plus GPUs with more VRAM than this. (AMD has stepped up to 8GB on its RX-series cards, with 16GB on its Radeon VII, while Nvidia is using 6GB or 8GB on most, with 24GB on its elite GeForce RTX 3090.) Either way, sub-4GB cards should only be used for secondary systems, gaming at low resolutions, or simple or older games that don’t need much in the way of hardware resources.
For creators, it’s an entirely different ballgame. In many 3D rendering programs (as well as VFX workflows, modeling, and video editing), specs like the boost clock speed are often a less important decision point than the amount of onboard VRAM. The more VRAM a card has, the faster its memory, and the wider its bandwidth pipe, the better it will be (in most cases) for a task like rendering out a complex VFX scene with thousands, if not millions, of different elements to calculate at once.
Memory bandwidth is another spec you will see. It refers to how quickly data can move into and out of the GPU. More is generally better, but again, AMD and Nvidia have different architectures and sometimes different memory bandwidth requirements, so these numbers are not directly comparable.
Memory type is also an important factor in your next GPU purchase. Which type you’re buying into matters depending on the games you play or the programs you plan to run, though you won’t really have a choice within a given card line.
High-Bandwidth Memory 2 (HBM2): AMD went all-in on HBM2 with the release of the AMD Radeon VII, a card that can technically keep up with an RTX 2080 in gaming but also packs a whopping 16GB of HBM2 VRAM for content creators. HBM2 is actually preferred for a particular subset of workloads, such as video editing in Adobe Premiere Pro, and was favored by AMD despite its higher manufacturing cost. It has since fallen out of favor to GDDR6.
GDDR6: Considered the workhorse memory of the modern GPU, GDDR in its various versions lives in almost every card released in the past decade, from the RTX 2080 Ti and Titan RTX all the way down to the AMD Radeon RX 5600 XT. The latest version, GDDR6, is a reliable, highly tunable VRAM solution that fits every tier of the market and often provides more than enough horsepower for the price to handle even the most demanding AAA games at high resolution.
GDDR6X: Nvidia has begun employing this new memory type in its GeForce RTX 30 Series. It effectively doubles the available bandwidth of the original GDDR6 design without running into the signal-degradation and path-interference problems that previous iterations had to account for. This is the newest of the new for Nvidia, but it’s been speculated that AMD’s Big Navi may not go this direction, sticking instead with a more classic implementation of GDDR6.
Upgrading a Pre-Built Desktop With a New Graphics Card
Assuming the chassis is big enough, most pre-built desktops these days have enough cooling capability to accept a new discrete GPU with no problems.
The first thing to do before buying or upgrading a GPU is to measure your chassis for the available card space. In some cases, you’ve got a gulf between the far right-hand edge of the motherboard and the hard drive bays. In others, you might have barely an inch to spare on the total length of your GPU. Really long cards can present a problem in some smaller cases. (See our favorite graphics cards for compact PCs.)
Next, check your graphics card’s height. The card partners sometimes field their own card coolers that depart from the standard AMD and Nvidia reference designs. Make certain that if your chosen card has an elaborate cooler design, it’s not so tall that it keeps your case from closing.
Finally: the power supply unit (PSU). Your system needs to have a PSU that’s up to the task of giving a new card enough juice. This is something to be especially wary of if you’re putting a high-end video card in a pre-built PC that was equipped with a low-end card, or no card at all. Doubly so if it’s a budget-minded or business system; these PCs tend to have underpowered or minimally provisioned PSUs.
The two most important factors to be aware of here are the number of six-pin and eight-pin cables on your PSU, and the maximum wattage the PSU is rated for. Most modern systems, including those sold by OEMs like Dell, HP, and Lenovo, employ power supplies that include at least one six-pin power connector meant for a video card, and some have both a six-pin and an eight-pin connector.
Midrange and high-end graphics cards will require a six-pin cable, an eight-pin cable, or some combination of the two to provide working power to the card. (The lowest-end cards draw all the power they need from the PCI Express slot.) Make sure you know what your card needs in terms of connectors.
We’ve seen some changes here in 2020, as the RTX 3080 Founders Edition requires a special adapter (it comes in the box) to turn two eight-pin PSU connectors into a single 12-pin one card-side, and the massive 12.7-inch MSI GeForce RTX 3080 Gaming X Trio now requires a whopping three eight-pin connectors to suck down its required juice.
Nvidia and AMD both outline recommended power supply wattage for each of their graphics-card families. Take these guidelines seriously, but they are just guidelines, and they are generally conservative. If AMD or Nvidia says you need at least a 500-watt PSU to run a given GPU, don’t chance it with the 300-watter you may have installed, but know that you don’t need an 800-watt PSU to guarantee enough headroom, either.
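If you want a rough sanity check beyond the vendor guideline, you can total your components’ typical draws yourself. The wattage figures below are illustrative assumptions, not measurements; look up your own parts’ rated figures:

```python
# Illustrative component power draws in watts (substitute your own parts' ratings).
components = {
    "CPU": 125,
    "graphics card": 320,  # e.g., a high-end card's rated board power
    "motherboard, RAM, drives, fans": 75,
}

total_draw = sum(components.values())
psu_rating = 750  # watts, the PSU's maximum sustained output

headroom = psu_rating - total_draw
print(f"Estimated draw: {total_draw}W; headroom on a {psu_rating}W PSU: {headroom}W")
```

Keeping the estimated total well under the PSU’s rating leaves margin for transient spikes and keeps the unit running in its efficient range.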
SLI, CrossFire, and NVLink: Fading for Gamers
Over the past few generations, both AMD and Nvidia have been moving away from support for dual-, tri-, or even quad-card setups. These have traditionally been a not-so-cheap, but somewhat easy, way to maximize performance. But the value proposition (for PC gamers, specifically) just isn’t there anymore.
In our testing at PC Labs last year with twin RTX 2080 Ti cards, we found that adding a second card to the mix provided, well…mixed results, to put it mildly. Most games these days aren't written to leverage two or more cards, and those that are don't see performance scale in proportion. (SLI and NVLink are Nvidia's multi-card technologies; CrossFire is AMD's.) Some games actually run worse; it all comes down to engine optimization.
For content creation tasks, though, it’s a different story. There’s a reason why the GeForce RTX 3090 is the only card in Nvidia’s new-for-2020 lineup that supports any kind of NVLink card-pairing: Pro-level creators are the only ones who will get enough use out of it to get a satisfactory return on investment.
Bottom line? In almost all cases nowadays, you'll be best served by buying the single best card you can afford, rather than buying one lesser card now and planning to add a second later.
Ports and Preferences: What Connections Should My Graphics Card Have?
Three kinds of port are common on the rear edge of a current graphics card: DVI, HDMI, and DisplayPort. Some systems and monitors still use DVI, but it’s the oldest of the three standards and no longer appears on high-end cards these days.
Most cards have several DisplayPorts (often three) and one HDMI port. When it comes to HDMI versus DisplayPort, there are some differences to note. First, if you plan on using a 4K display, now or in the future, your card needs to support at least HDMI 2.0a or DisplayPort 1.2/1.2a. It's fine if the GPU supports anything above those versions, like HDMI 2.0b or DisplayPort 1.4, but that's the minimum you'll want for smooth 4K playback or gaming. (The latest-gen cards from both makers will be fine on this score.)
HDMI 2.1 is a new cable spec compatible with all of Nvidia’s GeForce RTX 30 Series cards, which ups the old bandwidth limits from 18Gbps (in 2.0) to 48Gbps (in 2.1). The upgrade also enables 8K resolution to display at a refresh rate up to 60Hz, with 4K supported up to 120Hz.
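Those bandwidth figures map directly onto resolution and refresh rate. A rough back-of-the-envelope calculation (ignoring blanking intervals, link encoding overhead, and the compression the real spec can apply) shows why 4K at 120Hz overwhelms HDMI 2.0's 18Gbps but fits within 2.1's 48Gbps:

```python
def raw_bandwidth_gbps(width, height, refresh_hz, bits_per_pixel=24):
    """Raw pixel data rate in Gbps for uncompressed 8-bit RGB,
    ignoring blanking intervals and link encoding overhead."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

print(round(raw_bandwidth_gbps(3840, 2160, 120), 1))  # 23.9 -> over 18Gbps, within 48Gbps
print(round(raw_bandwidth_gbps(7680, 4320, 60), 1))   # 47.8 -> right at the 48Gbps ceiling
```

The 8K figure landing right at the ceiling is why, in practice, 8K signaling leans on chroma subsampling or stream compression; the simple arithmetic here is just a sanity check on why the spec bump was needed.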
Note that some of the cards from Nvidia in its GeForce RTX series employ a new port, called VirtualLink. This port looks like (and can serve as) a USB Type-C port that also supports DisplayPort over USB-C. What the port is really designed for, though: attaching future generations of virtual-reality (VR) headsets, providing power and bandwidth adequate to the needs of VR head-mounted displays (HMDs). It’s nice to have, but no VR hardware supports it yet, and it’s not on the Founders Edition cards of RTX 30 Series, so its future is in doubt.
Looking Forward: Graphics Card Trends
Nvidia has been in the consumer video card driver’s seat for a few years now, but late 2020 and 2021 should see more card action than any years in recent memory, shaking things up between the two big players.
Nvidia GeForce vs. AMD Radeon: Looking Ahead
If your goal is a high-end graphics card (we define that, these days, as cards at $499 or more) for playing games at 4K, and you plan to use the card for three to five years, the upper end of the market is mostly Nvidia’s world at the moment. But that could shift as 2020 progresses, with AMD’s next-generation Big Navi cards expected to roll out soon. Based on the same 7nm manufacturing process as the first Navis but with an upgrade to the RDNA architecture, these cards could change AMD’s fortunes in the high-end graphics market, or at least make the company more competitive.
For now, AMD's Radeon VII, the company's first 7nm video card, is a competent offering for 1440p/4K play and content creators, but it doesn't quite topple even the RTX 2080 or the newer GeForce RTX 2080 Super in most respects. (See our face-off, AMD Radeon VII vs. Nvidia RTX 2080: Which High-End Gaming Card to Buy?) AMD has a tough road ahead on the high end given the strength of the GeForce RTX 3080.
Image Sharpening Tech, and Nvidia DLSS
Another major change to the gaming landscape over the last year or so has been the arrival of image-sharpening technologies: Radeon Image Sharpening (RIS) and FidelityFX with CAS from AMD, and Freestyle from Nvidia. But what are these technologies, exactly, and how do they help gamers shopping on a budget?
It all has to do with something called “render scaling.” In most modern games, you’ve likely seen something in your graphics settings that lets you change the render scale of a game. In essence, what this does is take the current resolution you have the game set to (in this example, let’s say it’s 2,560 by 1,440 pixels), and pushes the “render” resolution down by a particular percentage, perhaps down to 2,048 by 1,152 pixels (again, for the sake of this example).
But wait…who would make their game look worse on purpose? Users of game sharpeners, that's who. Image-sharpening technologies let you scale down a game's render resolution, thereby increasing the frame rate (lower resolutions mean fewer pixels for the GPU to draw, and thus less muscle needed), while a sharpener cleans things up on the back end for a modest performance cost.
“Cleaning things up” involves applying a sharpening filter to the downsampled image, and if you can tune it just right (85 percent down-render with a 35 percent sharpen scale is a popular ratio), in theory you can gain a significant amount of performance with little discernible loss in visual clarity. Why is this important? If you can render your game down without losing visual quality, ultimately this means you can render down the impact of the card you need to buy on your wallet, too.
We’ve pushed image-sharpening technologies to their limit, and in our testing found that the peak down-sample is about 30 percent. This means you can buy a card that’s nearly a third cheaper than the one you were originally looking at, sharpen it back up 30 percent using one of the aforementioned sharpening tools, and still get close to the same high-definition gaming experience you would expect from running a game at its native resolution without render scaling.
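The arithmetic behind render scaling is straightforward. Assuming the scale factor applies to each axis (as it does in most in-game render-scale sliders), a quick sketch shows both the example resolution from above and why even a modest scale cut frees up so much GPU work:

```python
def scaled_resolution(width, height, render_scale):
    """Render resolution at a given per-axis scale factor (0.0-1.0)."""
    return int(width * render_scale), int(height * render_scale)

def pixel_savings(render_scale):
    """Fraction of per-frame pixels the GPU no longer has to draw.
    The saving is quadratic because the scale applies to both axes."""
    return 1 - render_scale ** 2

# The example from above: 2,560 by 1,440 at an 80 percent render scale...
print(scaled_resolution(2560, 1440, 0.80))   # (2048, 1152)
# ...cuts the per-frame pixel count by 36 percent:
print(round(pixel_savings(0.80), 2))         # 0.36
```

That quadratic relationship is the whole trick: a 20 percent drop on each axis removes over a third of the pixels per frame, which is where the frame-rate gain comes from before the sharpener spends a little of it back.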
On a related note, DLSS, short for "deep learning super sampling," is Nvidia's new solution to a problem as old as 3D-capable video cards themselves: how to smooth out the jagged polygon edges around a character or object with as little performance impact as possible. Anti-aliasing, as this is better known, is one of the most computationally demanding tasks a graphics card performs in video games, and since the technology's inception, a wide array of approaches have used all manner of math to achieve the same goal: make the jagged thing smoother.
In the case of DLSS, Nvidia employs artificial intelligence to help. But for now, DLSS comes at a premium (i.e., it requires a GeForce RTX card) because like ray-tracing, it can’t be done on just any ol’ CUDA core: It has to happen on a specialized graphics core known as a Tensor core. What the RT core is to ray-tracing, the Tensor core is to decoding complex equations provided by Nvidia’s artificial intelligence neural network.
DLSS and its follow-on, DLSS 2.0, show great promise; the main problem is that so few games support the tech. In our testing at PC Labs, we found that one of the highest-profile DLSS-capable games, Death Stranding, benefited from the use of either DLSS or CAS. But with DLSS specifically, visual quality actually improved, whereas CAS got it perhaps 90 percent of the way there, with some visible jitters appearing while characters were in motion.
If more games adopt it, it could be a huge boon to GeForce RTX owners. But the “if” is the big word there.
VR: New Interfaces, New HMDs?
As we alluded to with VirtualLink, VR is another consideration. VR's requirements are slightly different from those of conventional monitors. Both of the original mainstream VR HMDs, the HTC Vive and Oculus Rift, have an effective resolution across both eyes of 2,160 by 1,200. That's significantly lower than 4K, and it's the reason why midrange GPUs like AMD's Radeon RX 5700 XT or Nvidia's GeForce GTX 1660 Super can be used for VR. On the other hand, VR demands higher frame rates than conventional gaming. Low frame rates in VR (anything below 90 frames per second is considered low) can result in a bad VR gaming experience. Higher-end GPUs in the $300-plus category are going to offer better VR experiences today and more longevity overall, but VR with current-generation headsets can be sustained on a lower-end card than 4K.
That said, in 2019, two new headsets upped the power requirements a bit. The Oculus Rift S raised the bar with a combined resolution of 2,560 by 1,440 pixels, while the mega-enthusiast-level $1,000 Valve Index pumped its numbers up to 1,440 by 1,600 pixels per eye, or 2,880 by 3,200 pixels in total.
If you decide to splurge on one of these newer headsets, you’ll need a graphics card that can keep up with their intense demands (80Hz refresh on the Rift S, and 144Hz refresh on the Index). This means a system that can run a graphically intensive title like Half-Life: Alyx at 144fps on two displays at once, above 1080p. Valve recommends at least a GeForce RTX 2070 Super or an AMD RX 5700 XT for the best experience. In short, check the specs for any headset you are considering and follow the GPU recommendations scrupulously. Substandard VR is no VR at all, just a headache.
So, Which Graphics Card Should I Buy?
In the second half of 2020, that answer is more convoluted than ever before. New tech is changing the way that games interact with GPUs, and that evolution will only continue to muddy the waters at the top end while keeping both the midrange and low end in turmoil, thanks to image-sharpening-backed considerations (and possible discounts).
Speaking of the top end, right now neither the Nvidia GeForce RTX 3080 nor the GeForce RTX 3090 can be beat by anything AMD has on offer, so if you’re looking for the most power for power’s sake, Nvidia is your go-to. That said, availability on these cards is very, very tight, with no relief in sight.
AMD, on the other hand, is still holding strong in the value-oriented midrange market, but driver issues in some games continue to plague the line a year after the launch of the RX 5700 and RX 5700 XT. That should give some buyers pause.
Further down the stack, Nvidia and AMD butt heads regularly, each with its own answers for sub-$1,000 PC builders, spanning a huge number of card models and available options.
The GPUs below span the spectrum of budget to high end, representing the full range of the best cards that are available now. We update this story as the graphics-card landscape changes, so check back often for the latest products and buying advice.
Note that we’ve factored in just a sampling of third-party cards here; many more fill out the market. You can take our recommendation of a single reference card in a given card class (like the GeForce RTX 2060 Super, or Radeon RX 5700 XT) as an endorsement of the GPU family as a whole.