GPU Performance Benchmarks

As part of today’s announcement of the Tegra X1, NVIDIA also gave us a short opportunity to benchmark the X1 reference platform under controlled circumstances. In this case NVIDIA had several reference platforms plugged in and running, pre-loaded with various benchmark applications. The reference platforms themselves had a simple heatspreader mounted on them, intended to replicate the ~5W heat dissipation capabilities of a tablet.

The purpose of this demonstration was two-fold: first, to show that X1 was up and running and capable of NVIDIA’s promised features; and second, to showcase the strong GPU performance of the platform. Meanwhile NVIDIA also had an iPad Air 2 on hand for power testing, running Apple’s latest and greatest SoC, the A8X. NVIDIA has made it clear that they consider Apple the SoC manufacturer to beat right now, as the A8X’s PowerVR GX6850 GPU is the fastest among currently shipping SoCs.

It goes without saying that the results should be taken with an appropriate grain of salt until we can get Tegra X1 back to our labs. However we have seen all of the testing first-hand and as best as we can tell NVIDIA’s tests were sincere.

NVIDIA Tegra X1 Controlled Benchmarks
Benchmark | A8X (AT) | K1 (AT) | X1 (NV)
BaseMark X 1.1 Dunes (Offscreen) | 40.2fps | 36.3fps | 56.9fps
3DMark 1.2 Unlimited (Graphics Score) | 31781 | 36688 | 58448
GFXBench 3.0 Manhattan 1080p (Offscreen) | 32.6fps | 31.7fps | 63.6fps

For benchmarking, NVIDIA had BaseMark X 1.1, 3DMark 1.2 Unlimited, and GFXBench 3.0 up and running. Our X1 numbers come from the benchmarks we ran as part of NVIDIA’s controlled test, while the A8X and K1 numbers come from our Mobile Bench.

NVIDIA’s stated goal with X1 is to (roughly) double K1’s GPU performance, and while these controlled benchmarks for the most part don’t make it quite that far, X1 is still a significant improvement over K1. NVIDIA does meet their goal under Manhattan, where performance is almost exactly doubled, while 3DMark and BaseMark X scores increased by 59% and 56% respectively.
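
As a quick sanity check, those gains fall straight out of the table. Below is a minimal sketch of the arithmetic (plain Python, using only the controlled-test numbers listed above):

    # X1-over-K1 speedups from the controlled-test numbers in the table above
    results = {
        "BaseMark X 1.1 Dunes (Offscreen)":         (36.3, 56.9),    # (K1, X1) in fps
        "3DMark 1.2 Unlimited (Graphics Score)":    (36688, 58448),  # (K1, X1) scores
        "GFXBench 3.0 Manhattan 1080p (Offscreen)": (31.7, 63.6),    # (K1, X1) in fps
    }

    for name, (k1, x1) in results.items():
        gain = (x1 / k1 - 1) * 100
        print(f"{name}: X1 is {gain:.1f}% faster than K1")

    # Prints roughly 56.7%, 59.3%, and 100.6% respectively, i.e. the
    # mid-to-high 50% gains and the near-exact doubling under Manhattan.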

Finally, for power testing NVIDIA had an X1 reference platform and an iPad Air 2 rigged to measure the power consumption from the devices’ respective GPU power rails. The purpose of this test was to showcase that, thanks to X1’s energy optimizations, X1 is capable of delivering the same GPU performance as the A8X’s GPU while drawing significantly less power; in other words, that X1’s GPU is more efficient than A8X’s GX6850. To be clear, these are just GPU power measurements and not total platform power measurements, so this won’t account for CPU differences (e.g. A57 versus Enhanced Cyclone) or the power impact of LPDDR4.

Top: Tegra X1 Reference Platform. Bottom: iPad Air 2

For power testing NVIDIA ran Manhattan 1080p (offscreen) with X1’s GPU underclocked to match the performance of the A8X at roughly 33fps. The average power consumption (in watts) measured for the X1 and A8X is detailed below.

NVIDIA’s tools show the X1’s GPU averaging 1.51W over the run of Manhattan, while the A8X’s GPU averages 2.67W, over a watt more for otherwise equal performance. This test is especially notable since both SoCs are manufactured on the same TSMC 20nm SoC process, which means the power difference comes down to architectural efficiency rather than the manufacturing process.
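
Those two readings also make for an easy rough perf-per-watt comparison. A minimal sketch follows; the ~33fps matched operating point and the two power figures are the only inputs, so treat the exact ratio as approximate:

    # Rough GPU efficiency comparison at the matched ~33 fps operating point,
    # using the GPU-rail power readings reported by NVIDIA's tools
    matched_fps = 33.0   # Manhattan 1080p offscreen, X1 underclocked to match A8X
    x1_power_w  = 1.51   # Tegra X1 GPU rail, average watts
    a8x_power_w = 2.67   # A8X (GX6850) GPU rail, average watts

    print(f"X1:  {matched_fps / x1_power_w:.1f} fps/W")
    print(f"A8X: {matched_fps / a8x_power_w:.1f} fps/W")
    print(f"Efficiency ratio (X1 vs A8X): {a8x_power_w / x1_power_w:.2f}x")

    # Roughly 21.9 fps/W versus 12.4 fps/W, or about a 1.77x efficiency
    # advantage for X1 at this operating point, per NVIDIA's measurements.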

There are a number of other variables we’ll ultimately need to take into account here, including clockspeeds, relative die area of the GPU, and total platform power consumption. But assuming NVIDIA’s numbers hold up in final devices, X1’s GPU is looking very good out of the gate – at least when tuned for power over performance.

Comments (194)

  • Morawka - Monday, January 5, 2015 - link

    omg i'm gonna go buy some nvidia stock now. Not because of the X1, but because of the Automotive platforms.
  • iwod - Monday, January 5, 2015 - link

    That is some impressive GPU performance per watt. However, I think LPDDR4 with double the bandwidth does help the performance on X1. But even accounting for that difference, the A8X GPU still does not hold up against Maxwell, assuming Nvidia's benchmarks can be trusted.
    It should be noted that the A8X is partly a custom GPU from Apple. Since it doesn't come directly from IMG, it is likely not as power efficient as it could be.
  • junky77 - Monday, January 5, 2015 - link

    Where's AMD in all this?
  • chizow - Monday, January 5, 2015 - link

    They're not in the discussion; blame Dirk "Not Interested in Netbooks" Meyer for that one.
  • junky77 - Monday, January 5, 2015 - link

    :(
    But all the other stuff shown here - vehicles and stuff (not that I think there will be good AI in 2017, but still)
  • GC2:CS - Monday, January 5, 2015 - link

    This chip looks awesome, but so did all the Tegras before it.

    Like the Tegra K1, a huge announcement supposed to bring a "revolution" to mobile graphics computing. That turned out to be a power hog, pulling so much power it was absolutely unsuitable for any phone, and it's also throttling significantly.

    This looks like the same story yet again: lots of marketing talk, lots of hype, no promise delivered.
  • pSupaNova - Monday, January 5, 2015 - link

    Nothing wrong with the Tegra K1 in either form; I have a Shield Tablet and a Nexus 9.

    I have a program that I modified so I can run https://www.shadertoy.com/ shaders natively, and both tablets are impressively fast.

    Nvidia just needs to make sure they're on the same process as Apple and they will have the fastest SoC, CPU- and GPU-wise.
  • techconc - Tuesday, January 6, 2015 - link

    Apple is expected to move to 14nm for the A9. That's just speculation, but given Apple's position in the supply chain as opposed to nVidia's, I would be surprised if nVidia were able to be on the same process. With regards to CPUs, since nVidia has regressed from the Denver core to the standard reference designs, I wouldn't expect nVidia to have any CPU advantage. Certainly not with single-threaded apps anyway. As for the GPU, the Rogue 7 series appears to be more scalable, with up to 512 "cores". If the X1 chip has any GPU advantage it would not be for technical reasons; rather, it would be because Apple chose not to scale up to that level. Given that Apple has historically chosen rather beefy GPUs, I would again be surprised if they allowed the X1 to have a more powerful GPU. We'll see.
  • kron123456789 - Monday, January 5, 2015 - link

    "it's also throotling significally." — Um, no. It has throttling under heavy load but it's about 20% in worst case. It was Snapdragon 800/801 and Exynos 5430 that "throotling significally".
  • jwcalla - Monday, January 5, 2015 - link

    The fact that the announcement for this chip was coordinated with an almost exclusive discussion about automotive applications -- and correct me if I'm wrong, but it does not appear they even discussed gaming or mobile applications, except for the demo -- could be a signal of which markets NVIDIA wants to focus Tegra on and which markets they're abandoning.

    A couple of years back Jen-Hsun said that Android was the future of gaming, but I wonder if he still believes that today.

    I do think there is some truth to the idea that there is not much of a consumer market for high-end mobile graphics. Other than making for a great slide at a press event (Apple), there doesn't seem to be much of a use case for big graphics in a tablet. The kinds of casual games people play there don't seem to align with nvidia's strengths.
