When Nvidia launched its all-new Pascal architecture last year with the GTX 1080 and GTX 1070, gamers the world over waited for AMD to respond, and now it finally has with the Radeon RX Vega. We’ve known this was coming for a long time, as AMD has been talking about it and hyping it for seemingly forever, and now the time for talking is over because we finally have all the information we need to render our verdict on Fiji’s successor. Let’s dive in.
AMD is coming to market with two GPUs based on its 14nm Vega architecture: the Vega 64 and the Vega 56, priced at $499 and $399 respectively. AMD is also introducing a GPU-plus-bonuses bundle called Radeon Packs, which you can read about here. Essentially, if you spend $100 more on the GPU you can get discounts on a FreeSync monitor, a Ryzen CPU, and games. It's a fun idea, and if you're in the market for an AMD GPU a FreeSync monitor is a natural companion. The packs are a bit convoluted in how they work, though, so we'll spare you the details here; just follow the link above for more.
The new Radeon RX Vega GPUs are the flagship gaming models of the company's all-new Vega architecture, the successor to the Polaris architecture that debuted with the RX 480 and other midrange GPUs. Now AMD has bumped up everything, going from a 232mm² die on the RX 480 to a 486mm² die on these two GPUs. For comparison, the GP104 die used in the GTX 1080/1070 is 314mm², so Vega is quite a bit larger. It's even bigger than the GTX 1080 Ti's 471mm² die. Both of these GPUs are competing directly with the GTX 1080/1070, as neither has the brawn to go up against the GTX 1080 Ti, which seemingly will remain the king until the next round, when AMD's Navi arrives at some point in the future. But by then Nvidia will have Volta ready too, so around and around we go.
Both Radeon Vega GPUs have the same basic specs, though the 64 is clocked higher, has a faster memory clock, and packs more streaming processors. It also consumes 85W more power. Both of the GPUs I received for testing looked identical in every way, and are typical Radeon reference design boards with illuminated Radeon logos and dual eight-pin power connectors.
Given the GPUs’ high power requirements, at least relative to their Nvidia counterparts, AMD has implemented a few technologies to reduce power consumption. The reference cards I received feature a switch on the card itself that lets you flip to an alternate VBIOS, shaving 15-30W off power consumption depending on which power profile you select. Yes, there are three power profiles available in the Radeon software: Power Saving, Balanced, and Turbo. There’s also Radeon Chill, which dynamically throttles the framerate during low-motion moments to keep temperatures down.
The big news with these GPUs is that they employ what AMD calls a High Bandwidth Cache, which is comprised of 8GB of second-gen High Bandwidth Memory, aka HBM2. Instead of the usual 256-bit or 384-bit path to the GPU, this memory sits alongside the die on an interposer and has an incredibly wide 2048-bit path. This allows the memory to run at much slower speeds than the GDDR5/GDDR5X used by Nvidia while achieving similar results. If you’re concerned that 8GB of memory isn’t enough, and you should be if you want to run games at 4K resolution, AMD allows you to allocate system memory that is pooled with the onboard HBM2 via a slider in the software. An AMD rep I spoke with about it called it a "forward looking feature," noting it currently won't offer much benefit. By default it seemed to automatically allocate half my test system's memory, and there was no way to fine-tune it to use just one or two gigabytes, so it still seems kind of beta.
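To see why a wide-and-slow bus can keep pace with a narrow-and-fast one, here's the back-of-the-envelope math. The per-pin data rates below are the publicly listed specs for each card, not figures I measured:

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bits per transfer / 8, times the per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# Vega 64: 2048-bit HBM2 at ~1.89 Gbps per pin (listed spec)
vega_64 = bandwidth_gb_s(2048, 1.89)   # ~484 GB/s
# GTX 1080: 256-bit GDDR5X at 10 Gbps per pin (listed spec)
gtx_1080 = bandwidth_gb_s(256, 10.0)   # 320 GB/s
print(vega_64, gtx_1080)
```

Despite the HBM2 running at a fraction of GDDR5X's per-pin speed, the eight-times-wider bus leaves Vega with more raw bandwidth on paper.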
AMD is also introducing a new display refresh technology called Enhanced Sync, which seems designed for folks without a FreeSync display, though AMD notes it is compatible with refresh-sync technologies. According to AMD, if your game's framerate exceeds the display's refresh rate (typically 60Hz), it lets the framerate run uncapped instead of locking it to 60Hz, and shows the last completed frame on each refresh interval, theoretically reducing input lag. If your game dips below the refresh rate, it disables V-sync and allows tearing, again to reduce input lag. You can set it globally for all games or per-game via the Radeon software.
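Based purely on AMD's description above, the decision logic boils down to something like this. This is a simplified sketch; the function and return strings are mine, not AMD's:

```python
def enhanced_sync_behavior(fps: float, refresh_hz: float) -> str:
    """Sketch of Enhanced Sync's described policy (illustrative, not AMD's code)."""
    if fps > refresh_hz:
        # Above the refresh rate: render uncapped, but present only the most
        # recently completed frame at each refresh -> no tearing, less input lag.
        return "uncapped render, latest complete frame per refresh"
    else:
        # Below the refresh rate: drop V-sync so frames display as soon as
        # they're ready, accepting tearing in exchange for lower latency.
        return "v-sync off, tearing allowed"
```

In other words, it trades traditional V-sync's guaranteed tear-free image for lower latency at both ends of the framerate range.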
Both the Vega 64 and 56 reference cards I received are identical, measuring 10.5" long and taking up two PCI Express slots. Both cards have three DisplayPort outputs and one HDMI port. Both also feature a strip of LEDs above the PCIe connectors that displays the current GPU load. They're labeled GPU Tach and go from one light illuminated when the card is idle to all of them lit when the GPU is at full throttle. It looks kind of cool, but I sort of wish they'd dance a little like the LEDs on RAM.
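The GPU Tach behavior described above maps load to lit LEDs something like this. AMD doesn't document the exact curve, so the linear mapping and the LED count parameter here are assumptions for illustration:

```python
def gpu_tach_leds(load_pct: float, led_count: int = 8) -> int:
    """How many tachometer LEDs to light for a given GPU load (0-100%).
    Linear mapping is an assumption; per the review, at least one LED
    stays lit at idle and all are lit at full throttle."""
    load_pct = max(0.0, min(100.0, load_pct))  # clamp to valid range
    return max(1, round(load_pct / 100 * led_count))
```

A simple clamp-and-scale like this is how most hardware activity meters work, whatever the real firmware does.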
To see what the Vega GPUs are capable of, we tossed them into the IGN test bench, which was built with an Intel Core i7-7700K (non-overclocked) CPU, 8GB of DDR4, an Asus Prime Z270 motherboard, and an Intel SSD. We compared them to the GTX 1080 and GTX 1070, and threw in some GTX 1080 Ti benchmarks too, since you're probably curious. All tests were run at three resolutions with the maximum in-game settings possible without any anti-aliasing. I set both Radeon GPUs to their "Turbo" setting for maximum performance.
I was the one who reviewed the Radeon R9 Fury X, and the Vega 64 certainly feels like it's in the same position as that ill-fated GPU. It was only able to beat the GTX 1080 in 3DMark and Batman: Arkham Knight, but trailed by a decent margin in the rest of the tests, which is disappointing. Though this is obviously not an exhaustive benchmark suite with 20+ games, given what we've seen previously with the Frontier Edition, it's doubtful there's a silver-bullet game or driver that will miraculously catapult the Vega 64 into a position of prominence. All available evidence shows the Vega 64 can't quite match the GTX 1080 in pure performance, which is all the more puzzling given that it draws considerably more power. The reference design Radeon Vega 64 ran hot and loud too, reminiscent of Hawaii in fact, sitting at a toasty 85C throughout testing. It ran at 1,630MHz when set to Turbo mode in WattMan.
It's definitely the fastest current Radeon GPU, and it's awesome for 2560x1440 gaming, but then again so is the GTX 1080. It's close, but the advantage clearly goes to Nvidia this round. The saving grace is if you toss a FreeSync monitor into the equation: you should end up with a much less expensive setup than one with an Nvidia card and a G-Sync monitor, since G-Sync displays tend to run about $200 more than comparable FreeSync models.
The Vega 56 is a much more compelling GPU than the Vega 64 simply because it is a better match for the GTX 1070 in terms of performance, power consumption aside. To illustrate this battle let's take a look at just these two GPUs at the three resolutions.
As you can see, these two GPUs are extremely close to one another, trading blows throughout testing. It was such a back-and-forth that it's difficult to say one is categorically better than the other unless you bring power consumption into the equation. If you do, then clearly the GTX 1070 is doing more with less, but personally I don't think desktop users care too much about power consumption unless it leads to things like excessive heat and noise. On this front, the Vega 56 generally ran at 1,538MHz in Turbo mode and operated at 78C under load after several hours. I'd say it was a bit noisy, though less so than the Vega 64. Generally speaking, all reference designs run hot and are a bit loud, which is why we always advise waiting for the partner boards to arrive, since they solve these issues.