GPU Performance Benchmarks

As part of today’s announcement of the Tegra X1, NVIDIA also gave us a short opportunity to benchmark the X1 reference platform under controlled circumstances. In this case NVIDIA had several reference platforms plugged in and running, pre-loaded with various benchmark applications. The reference platforms themselves had a simple heatspreader mounted on them, intended to replicate the ~5W heat dissipation capabilities of a tablet.

The purpose of this demonstration was two-fold: first, to show that X1 was up and running and capable of NVIDIA’s promised features; second, to showcase the platform’s strong GPU performance. Meanwhile NVIDIA also had an iPad Air 2 on hand for power testing, running Apple’s latest and greatest SoC, the A8X. NVIDIA has made it clear that they consider Apple the SoC manufacturer to beat right now, as the A8X’s PowerVR GX6850 GPU is the fastest among currently shipping SoCs.

It goes without saying that the results should be taken with an appropriate grain of salt until we can get Tegra X1 back to our labs. However, we saw all of the testing first-hand, and as best as we can tell NVIDIA’s tests were legitimate.

NVIDIA Tegra X1 Controlled Benchmarks
Benchmark                                   A8X (AT)   K1 (AT)   X1 (NV)
BaseMark X 1.1 Dunes (Offscreen)            40.2fps    36.3fps   56.9fps
3DMark 1.2 Unlimited (Graphics Score)       31781      36688     58448
GFXBench 3.0 Manhattan 1080p (Offscreen)    32.6fps    31.7fps   63.6fps

For benchmarking, NVIDIA had BaseMark X 1.1, 3DMark 1.2 Unlimited, and GFXBench 3.0 up and running. Our X1 numbers come from the benchmarks we ran as part of NVIDIA’s controlled test, while the A8X and K1 numbers come from our Mobile Bench.

NVIDIA’s stated goal with X1 is to (roughly) double K1’s GPU performance, and while these controlled benchmarks for the most part don’t make it quite that far, X1 is still a significant improvement over K1. NVIDIA does meet their goal under Manhattan, where performance is almost exactly doubled; meanwhile 3DMark and BaseMark X increase by 59% and 57% respectively.
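As a quick sanity check, the relative gains quoted above can be recomputed directly from the controlled-benchmark table (a minimal sketch; the scores are taken straight from the table):

```python
# X1's relative gain over K1, recomputed from the controlled-benchmark table.
k1 = {
    "BaseMark X 1.1 Dunes": 36.3,        # fps
    "3DMark 1.2 Unlimited": 36688,       # graphics score
    "GFXBench 3.0 Manhattan": 31.7,      # fps
}
x1 = {
    "BaseMark X 1.1 Dunes": 56.9,
    "3DMark 1.2 Unlimited": 58448,
    "GFXBench 3.0 Manhattan": 63.6,
}

for name in k1:
    gain = (x1[name] / k1[name] - 1) * 100
    print(f"{name}: +{gain:.0f}% over K1")
```

Manhattan lands at almost exactly 2x, while the other two tests come in somewhat short of NVIDIA's doubling target.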

Finally, for power testing NVIDIA had an X1 reference platform and an iPad Air 2 rigged to measure power consumption from the devices’ respective GPU power rails. The purpose of this test was to showcase that, thanks to its energy optimizations, X1 can deliver the same GPU performance as the A8X while drawing significantly less power; in other words, that X1’s GPU is more efficient than A8X’s GX6850. To be clear, these are just GPU power measurements and not total platform power measurements, so they won’t account for CPU differences (e.g. A57 versus Enhanced Cyclone) or the power impact of LPDDR4.

Top: Tegra X1 Reference Platform. Bottom: iPad Air 2

For power testing, NVIDIA ran Manhattan 1080p (offscreen) with X1’s GPU underclocked to match the performance of the A8X at roughly 33fps. Below are the average power consumption figures (in watts) for the X1 and A8X respectively.

NVIDIA’s tools show the X1’s GPU averaging 1.51W over the run of Manhattan, while the A8X’s GPU averages 2.67W, over a watt more for otherwise equal performance. This test is especially notable since both SoCs are manufactured on the same TSMC 20nm SoC process, which means that any difference in efficiency between the two GPUs comes down to architecture and design rather than process.
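In performance-per-watt terms, those two measurements work out as follows (a back-of-the-envelope sketch using the figures above; both GPUs were tuned to roughly the same frame rate):

```python
# Iso-performance efficiency comparison: Manhattan 1080p offscreen at ~33fps,
# with GPU-rail power as measured by NVIDIA's instrumented platforms.
fps = 33.0          # both GPUs running at roughly this frame rate
x1_watts = 1.51     # average GPU-rail power, Tegra X1 (underclocked)
a8x_watts = 2.67    # average GPU-rail power, A8X (GX6850)

x1_eff = fps / x1_watts    # fps per watt, X1
a8x_eff = fps / a8x_watts  # fps per watt, A8X

print(f"X1:  {x1_eff:.1f} fps/W")
print(f"A8X: {a8x_eff:.1f} fps/W")
print(f"Efficiency ratio: {x1_eff / a8x_eff:.2f}x in X1's favor")
```

Since the frame rates were matched, the efficiency ratio reduces to the power ratio: about 1.8x in X1's favor on this one test.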

There are a number of other variables we’ll ultimately need to take into account here, including clockspeeds, relative die area of the GPU, and total platform power consumption. But assuming NVIDIA’s numbers hold up in final devices, X1’s GPU is looking very good out of the gate – at least when tuned for power over performance.


  • Jumangi - Monday, January 5, 2015 - link

    Apple would never use Nvidia at the power consumption levels it brings. The power is pointless to them if it can't be put into a smartphone level device. Nvidia still doesn't get why nobody in the OEM market wants their tech for a phone.
  • Yojimbo - Monday, January 5, 2015 - link

    But the NVIDIA SOCs are on a less advanced process node, so how can you know that? You seem to be missing the whole point. The point is not what Apple wants or doesn't want. The point is to compare NVIDIA's GPU architecture to the PowerVR Series 6XT GPU. You cannot directly compare the merits of the underlying architecture by comparing performance and power efficiency when the implementations are using different sized transistors. And the question is not the level of performance and power efficiency Apple was looking for in their A8. The question is simply peak performance per watt for each architecture.
  • OreoCookie - Tuesday, January 6, 2015 - link

    @Yojimbo
    The Shield was released with the Cortex A15-based Tegra K1, not the Denver-based K1. The former is not competitive with regards to CPU performance, the latter plays in the same league. AFAIK the first Denver-based K1 product was the Nexus 9. Does anyone know of any tablets which use the Denver-based K1?
  • lucam - Wednesday, January 7, 2015 - link

    Apple sells products with a one-year life cycle; they don't sell chips, and therefore they don't need to do any marketing in advance the way NV punctually does at every CES.
  • TheJian - Monday, January 5, 2015 - link

    It's going 16nm FinFET later this year (Parker). As noted here it's NOT in this chip due to time to market, and probably not as much gained by shrinking to 20nm vs. going straight to 16nm FinFET anyway. Even Qcom went off the shelf for S810 again for time to market.

    Not sure how you get that Denver is a disappointment. It just came out...LOL. It's a drop in replacement for anyone using K1 32bit (pin compatible), so I'm guessing we'll see many more devices pop up quicker than the first rev, but even then it will have a short life due to X1 and what is coming H2 with Denver yet again (or an improved version).

    What do you mean K1 is in ONE device? You're kidding right? Jeez, just go to amazon punch Nvidia K1 into the search. Acer, HP, NV shield, Lenovo, Jetson, Nexus9, Xiaomi (mipad not sold on amazon but you get the point)...The first 4 socs were just to get us to desktop gpu. The real competition is just starting.

    Building the cpu wasn't just for mobile either. You can now go after desktops/higher end notebooks etc with NO WINTEL crap in them and all the regular PC trimmings (high psu, huge fan/heatsink, hd's, ssd's etc etc, discrete gpu if desired, 16-32GB of ram etc). All of this timed perfectly with 64bit OS getting polished up for MUCH more complicated apps etc. The same thing that happened to low-end notebooks with chromebooks, will now happen with low end PC's at worst and surely more later as apps advance on android etc and Socs move further up the food chain in power and start running desktop models at 4ghz with fan/heatsinks (with a choice of discrete gpu when desired). With no Wintel Fee (copy of windows + Intel cpu pricing), they will be great for getting poor people into great gaming systems that do most of what they'd want otherwise (internet, email, docs, media consumption). I hope they move here ASAP, as AMD is no longer competition for Intel CPU wise.

    Bring on the ARM full PC like box! Denver was originally supposed to be x86 anyway LOL. Clearly they want in on Intel/AMD cpu territory and why not at CPU vs. SOC pricing? NV could sell an amped up SOC at 4ghz for $110/$150 vs. Intel's top end i5/i7's ($229/339). A very powerful machine for $200 less cash but roughly ~perf (when taking out the Windows fee also, probably save $200 roughly). Most people in this group won't miss the windows apps (many won't even know what windows is, grew up on a phone/tablet etc). Developing nations will love these as apps like Adobe Suite (fully featured) etc get moved, making these cheap boxes powerful content creators and potent gamers (duh, NV gpu in them). If they catch on in places like USA also, Wintel has an even bigger headache and will need to drop pricing to compete with ARM and all its ecosystem brings. Good times ahead in the next few years for consumers everywhere. This box could potentially run android, linux, steamos, chrome in a quadboot giving massive software options etc at a great price for the hardware. Software for 64bit on Arm will just keep growing yearly (games and adv apps).
  • pSupaNova - Tuesday, January 6, 2015 - link

    Agree totally with your post. NVIDIA did try to put good mobile chips in netbooks with the ION & ION2, and Intel blocked them.

    Good to see that they have stuck at the job and are now in a position to start eating Intel's lunch.
  • darkich - Monday, January 5, 2015 - link

    That's just not true.

    The K1 has shipped in three high-end Android tablets - the NVIDIA Shield, Xiaomi MiPad, and Nexus 9.

    Now, how many tablets got a Snapdragon 805?
    Exynos 5433?

    Tegra K1 market performance is simply the result of the fact that high end tablet market is taken up by Apple, and that it doesn't compete in mod range and low end.
  • darkich - Monday, January 5, 2015 - link

    *mid range
  • GC2:CS - Monday, January 5, 2015 - link

    It's the result of power consumption that's too high, which OEMs prefer to keep low.

    That's why Tegra K1 is used by just foolish Chinese manufacturers (like Tegra 4 in a phone) like Xiaomi, Google in desperate need of a non-Apple high-end 64-bit chip (to showcase how 64-bit it is), and NVIDIA themselves.
  • Yojimbo - Monday, January 5, 2015 - link

    I think you're right that the K1 is geared more towards performance than other SOCs. The K1 does show good performance/watt, but it does so with higher performance, using more watts. And you're right that most OEMs have preferred a lower power usage. But it doesn't mean that the K1 is a poor SOC. NVIDIA is trying to work towards increasing the functionality of the platform by allowing it to be a gaming platform. That is their market strategy. It is probably partially their strategy because those are the tools they have available to them; that is their bread-and-butter. But presumably they also think mobile devices can really be made into a viable gaming platform. Thinking about it in the abstract it seems to be obvious... Mobile devices should at some point become gaming platforms. NVIDIA is trying to make this happen now.
