3D rendering is a massively parallel problem. Your GPU ultimately has to determine the color of every pixel on the screen, and it has to do so dozens of times per second. The iPad 2 had 786,432 pixels in its display, and by all available measures its GPU was more than sufficient to drive that resolution. The new iPad has 3.14 million pixels to drive. The iPad 2's GPU would not be sufficient.
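The raw pixel math makes the scale of the problem clear. A quick back-of-the-envelope sketch (the 1024x768 and 2048x1536 figures are the panels' nominal resolutions, and the 60 frames per second worst case is my assumption, not a number from the review):

```python
# Pixel counts for the two displays and the worst-case per-second shading load.
ipad2_pixels = 1024 * 768        # 786,432 pixels
new_ipad_pixels = 2048 * 1536    # 3,145,728 pixels (~3.14 million)
fps = 60                         # assume every pixel changes every frame at 60Hz

print(f"iPad 2:   {ipad2_pixels:>9,} pixels -> {ipad2_pixels * fps / 1e6:6.1f}M pixels/s")
print(f"new iPad: {new_ipad_pixels:>9,} pixels -> {new_ipad_pixels * fps / 1e6:6.1f}M pixels/s")
print(f"Pixel count ratio: {new_ipad_pixels / ipad2_pixels:.0f}x")   # exactly 4x the pixels
```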

When we first heard Apple using the term A5X to refer to the new iPad's SoC, I assumed we were looking at a die-shrunk, higher-clocked version of the A5. As soon as it became evident that Apple remained on Samsung's 45nm LP process, higher clocks were out of the question. The only room for improving performance was to go wider. Thankfully, as 3D rendering is a massively parallel problem, simply adding more GPU execution resources tends to be a great way of dealing with a more complex workload. The iPad 2 shocked the world with its dual-core PowerVR SGX 543MP2 GPU, and the 3rd generation iPad doubled the amount of execution hardware with its quad-core PowerVR SGX 543MP4.

Mobile SoC GPU Comparison

|                                 | Adreno 225 | PowerVR SGX 540 | PowerVR SGX 543MP2 | PowerVR SGX 543MP4 | Mali-400 MP4 | Tegra 2 | Tegra 3 |
| SIMD Name                       | -          | USSE            | USSE2              | USSE2              | Core         | Core    | Core    |
| # of SIMDs                      | 8          | 4               | 8                  | 16                 | 4 + 1        | 8       | 12      |
| MADs per SIMD                   | 4          | 2               | 4                  | 4                  | 4 / 2        | 1       | 1       |
| Total MADs                      | 32         | 8               | 32                 | 64                 | 18           | 8       | 12      |
| GFLOPS @ 300MHz                 | 19.2       | 4.8             | 19.2               | 38.4               | 10.8         | 4.8     | 7.2     |
| GFLOPS as shipped by Apple/ASUS | -          | -               | 16                 | 32                 | -            | -       | 12      |
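The GFLOPS rows follow directly from the MAD counts: a MAD (multiply-add) counts as two floating point operations per cycle, so peak throughput is simply total MADs x 2 x clock. A quick sketch of that arithmetic (the ~250MHz and ~500MHz shipping clocks implied below are inferred from the "as shipped" row, not separately confirmed):

```python
def peak_gflops(total_mads, clock_ghz):
    """Peak shader throughput: each MAD counts as 2 FLOPs per cycle."""
    return total_mads * 2 * clock_ghz

# Normalized 300MHz figures from the table
print(peak_gflops(32, 0.30))   # SGX 543MP2: 19.2 GFLOPS
print(peak_gflops(64, 0.30))   # SGX 543MP4: 38.4 GFLOPS
print(peak_gflops(12, 0.30))   # Tegra 3:     7.2 GFLOPS

# Working backwards from the "as shipped" row implies roughly these clocks
print(peak_gflops(32, 0.25))   # iPad 2 (543MP2 @ ~250MHz):    16 GFLOPS
print(peak_gflops(64, 0.25))   # new iPad (543MP4 @ ~250MHz):  32 GFLOPS
print(peak_gflops(12, 0.50))   # TF Prime (Tegra 3 @ ~500MHz): 12 GFLOPS
```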

We see this approach all of the time in desktop and notebook GPUs. To allow games to run at higher resolutions, companies like AMD and NVIDIA simply build bigger GPUs. These bigger GPUs have more execution resources and typically more memory bandwidth, which allows them to handle rendering to higher resolution displays.

Apple acted no differently than a GPU company would in this case. When faced with the challenge of rendering to a 3.14MP display, Apple increased compute horsepower and memory bandwidth. What's surprising about Apple's move is that the A5X isn't a $600 desktop GPU; it's a sub-4W mobile SoC. And did I mention that Apple isn't a GPU company?

That's quite possibly the most impressive part of all of this. Apple isn't a GPU company. It's a customer of GPU companies like AMD and NVIDIA, yet Apple has done what even NVIDIA would not do: commit to building an SoC with an insanely powerful GPU.

I whipped up an image to help illustrate. Below is a to-scale representation of Apple and NVIDIA SoCs, their die sizes, and the time of each part's first product introduction:

If we look back to NVIDIA's Tegra 2, it wasn't a bad SoC; it was basically identical in size to Apple's A4. The problem was that the Tegra 2 made its debut a full year after Apple's A4 did. The more appropriate comparison is between the Tegra 2 and the A5, both of which shipped in products in the first half of 2011. Apple's A5 was nearly 2.5x the size of NVIDIA's Tegra 2, and a good hunk of that added die area came from the A5's GPU. Tegra 3 took a step in the right direction, but at 80mm^2 it was still considerably smaller; the A5 remained over 50% larger.

The A5X obviously dwarfs everything, at around twice the size of NVIDIA's Tegra 3 and 33.6% larger than Apple's A5. With silicon, size isn't everything, but when we're talking about similar architectures built on similar manufacturing processes, size does matter. Apple has been consistently outspending NVIDIA on silicon area, resulting in a raw horsepower advantage, which in turn results in better peak GPU performance.
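As a rough sanity check, the relationships quoted in the last two paragraphs hang together; a quick back-of-the-envelope sketch using only the figures in the text (published die measurements vary slightly by source):

```python
# Rough cross-check of the die size relationships described above.
tegra3_area = 80.0                 # mm^2, stated in the text
a5_area = tegra3_area * 1.5        # A5 is "over 50% larger" than Tegra 3 -> ~120mm^2 or more
a5x_area = a5_area * 1.336         # A5X is "33.6% larger than Apple's A5"

print(f"A5  ~{a5_area:.0f} mm^2")                       # ~120 mm^2
print(f"A5X ~{a5x_area:.0f} mm^2")                      # ~160 mm^2
print(f"A5X / Tegra 3 ~{a5x_area / tegra3_area:.1f}x")  # ~2x, matching "around twice the size"
```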

Apple Builds a Quad-Channel (128-bit) Memory Controller

There's another side effect of having a huge die: room for wide memory interfaces. Silicon layout is a balancing act. You want density to lower costs, but you don't want hotspots, so heavy compute logic needs to be spread out. You want wide IO interfaces, but not so wide that they cause your die area to balloon. There's only so much room on the perimeter of your SoC to get data out of the chip, hence the close relationship between die size and interface width.

Most mobile SoCs are equipped with either a single or dual-channel LP-DDR2 memory controller. Unlike in the desktop/notebook space, where a single DDR2/DDR3 channel refers to a 64-bit wide interface, in the mobile SoC world a single channel is 32 bits wide. Both Qualcomm and NVIDIA have used single-channel interfaces, with Snapdragon S4 finally making the jump to dual-channel this year. Apple, Samsung, and TI have used dual-channel LP-DDR2 interfaces instead.
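To put rough numbers on what channel count means, each LP-DDR2 channel here is 32 bits (4 bytes) wide, and peak theoretical bandwidth scales linearly with channel count. A quick sketch, assuming LPDDR2-800 style data rates (800 million transfers per second; an assumption on my part, not a figure from the teardown):

```python
# Theoretical peak LP-DDR2 bandwidth as a function of channel count.
# Assumes 32-bit channels and LPDDR2-800 data rates.
def peak_bandwidth_gb_s(channels, transfers_per_sec=800e6, bus_width_bits=32):
    bytes_per_transfer = bus_width_bits / 8
    return channels * transfers_per_sec * bytes_per_transfer / 1e9

print(peak_bandwidth_gb_s(1))  # single channel (e.g. Tegra 2/3): ~3.2 GB/s
print(peak_bandwidth_gb_s(2))  # dual channel (A5, Exynos, OMAP): ~6.4 GB/s
print(peak_bandwidth_gb_s(4))  # quad channel (A5X):              ~12.8 GB/s
```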

With the A5X Apple did the unthinkable and outfitted the chip with four 32-bit wide LP-DDR2 memory controllers. The confirmation comes from two separate sources. First, we have the annotated A5X floorplan courtesy of UBM TechInsights:

You can see the four DDR interfaces around the lower edge of the SoC. Second, we have the part numbers of the discrete DRAM devices on the opposite side of the motherboard. Chipworks and iFixit played the DRAM lottery and won, turning up Samsung and Elpida LP-DDR2 devices on their boards, respectively. While both Samsung and Elpida do a poor job of keeping their public part number decoders up to date, both strings match up very closely to 216-ball, 2x32-bit PoP DRAM devices. The part numbers don't match exactly, but they are close enough that I believe we're simply looking at a discrete flavor of those PoP DRAM devices.

K3PE4E400M-XG is the Samsung part number for a 2x32-bit LPDDR2 device; K3PE4E400E-XG is the part used in the iPad. The only difference is the M vs. E immediately before the -XG suffix.

A cross reference with JEDEC's LP-DDR2 spec tells us that there is an official spec for a single package, 216-ball dual-channel (2x32-bit) LP-DDR2 device, likely what's used here on the new iPad.

The ball out for a 216-ball, single-package, dual-channel (64-bit) LPDDR2 DRAM

This gives the A5X a 128-bit wide memory interface, double what the closest competition can muster and putting it on par with what we've come to expect from modern x86 CPUs and mainstream GPUs.

The Geekbench memory tests show no improvement in bandwidth, which simply tells us that the interface from the CPU cores to the memory controller hasn't seen a similar increase in width.

Memory Bandwidth Comparison (Geekbench 2)

|                      | Apple iPad (3rd gen) | ASUS TF Prime    | Apple iPad 2     | Motorola Xyboard 10.1 |
| Overall Memory Score | 821                  | 1079             | 829              | 1122                  |
| Read Sequential      | 312.0 MB/s           | 249.0 MB/s       | 347.1 MB/s       | 364.1 MB/s            |
| Write Sequential     | 988.6 MB/s           | 1.33 GB/s        | 989.6 MB/s       | 1.32 GB/s             |
| Stdlib Allocate      | 1.95 Mallocs/sec     | 2.25 Mallocs/sec | 1.95 Mallocs/sec | 2.2 Mallocs/sec       |
| Stdlib Write         | 2.90 GB/s            | 1.82 GB/s        | 2.90 GB/s        | 1.97 GB/s             |
| Stdlib Copy          | 554.6 MB/s           | 1.82 GB/s        | 564.5 MB/s       | 1.91 GB/s             |
| Overall Stream Score | 331                  | 288              | 335              | 318                   |
| Stream Copy          | 456.4 MB/s           | 386.1 MB/s       | 466.6 MB/s       | 504 MB/s              |
| Stream Scale         | 380.2 MB/s           | 351.9 MB/s       | 371.1 MB/s       | 478.5 MB/s            |
| Stream Add           | 608.8 MB/s           | 446.8 MB/s       | 654.0 MB/s       | 420.1 MB/s            |
| Stream Triad         | 457.7 MB/s           | 463.7 MB/s       | 437.1 MB/s       | 402.8 MB/s            |

Although Apple designed its own memory controller for the A5X, you can see that all of these Cortex A9 based SoCs deliver roughly similar memory performance. The numbers here aren't good at all; Geekbench has never been great at demonstrating peak memory controller efficiency, but even so, the Stream results are poor. ARM's L2 cache controller is a real bottleneck in the Cortex A9, something that should be addressed by the time the A15 rolls around.
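For context on what those Stream rows actually measure, the kernels are trivially simple: four array operations designed to stress nothing but memory bandwidth. A minimal illustrative sketch of the idea behind the STREAM kernels (not Geekbench's actual implementation) might look like this:

```python
import time
import numpy as np

# Minimal STREAM-style kernels (Copy, Scale, Add, Triad) using numpy.
N = 10_000_000                  # ~80MB per float64 array, far larger than any L2 cache
b = np.random.rand(N)
c = np.random.rand(N)
a = np.empty(N)
scalar = 3.0

def run(name, kernel, bytes_per_element):
    start = time.perf_counter()
    kernel()
    elapsed = time.perf_counter() - start
    print(f"{name:5s}: {N * bytes_per_element / elapsed / 1e9:.2f} GB/s")

run("copy",  lambda: np.copyto(a, b),                2 * 8)  # read b, write a
run("scale", lambda: np.multiply(b, scalar, out=a),  2 * 8)  # read b, write a
run("add",   lambda: np.add(b, c, out=a),            3 * 8)  # read b and c, write a
run("triad", lambda: np.add(b, scalar * c, out=a),   3 * 8)  # read b and c, write a (numpy adds a temporary here)
```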

Firing up the memory interface is a very costly action from a power standpoint, so it makes sense that Apple would only want to do so when absolutely necessary. Furthermore, notice how the memory interface moved from being closer to the CPU in A4/A5 to being adjacent to the GPU in the A5X. It would appear that only the GPU has access to all four channels.



Comments

  • jjj - Wednesday, March 28, 2012

    Testing battery life only in web browsing? Maybe that would be OK for a $100 device. As it is, the battery tests are pretty poor: you do video playback when every SoC out there has a dedicated decode unit, so that test is only representative of video playback. The most important test here should have been battery life with both the GPU and CPU loaded, and not including it seems like an intentional omission to avoid making the device look bad.
    There are a lot of other things to say about the review, too many, but one thing has to be said.
    This is a plan B or plan C device. The screen is the selling point, it's what had to go in; they didn't have 28/32nm in time and had to go for a heavier, thicker, hotter device with a huge chip (CPU speed is most likely limited by heat rather than power consumption, though of course both are directly related). Apple had to make way too many compromises to fit in the screen, no way this was plan A.
  • tipoo - Thursday, March 29, 2012

    I would have liked a gaming battery life test as well.
  • PeteH - Thursday, March 29, 2012

    Beyond even that, I'd like to see a worst-case battery life (i.e. gaming, max brightness, LTE up, etc).

    Also, it'd be really interesting to see how brightness impacts battery life. Maybe the web browsing test at 20%, 40%, 60%, 80%, and 100% brightness. Of course that would probably delay the review by several days, so it might not be worth it.
  • Anand Lal Shimpi - Thursday, March 29, 2012

    We did a max brightness test, however a gaming test would be appropriate as well. I will see if I can't run some of that in the background while I work on things for next week :)

    Take care,
  • SimpleLance - Wednesday, March 28, 2012

    The biggest drain for the battery comes from the display. So, if the iPad will be used for hotspot only (with display turned off), you will get a lot of hours from it because it has such a huge battery.

    But then, using the iPad just for a hotspot would be a waste of that gorgeous display.

    Very nice review of a very nice product.
  • thrawn3 - Wednesday, March 28, 2012

    Am I the only one that feels the max brightness is more important in day to day use of a highly portable device than DPI and color accuracy?
    I absolutely would love to have all of these three be excellent but I think for a tablet or small laptop Max Brightness and DPI are higher priority than color accuracy. This is exactly what the ASUS Transformer Infinity is supposed to be but I would prefer it on a real laptop.
    I care about color accuracy too but I am perfectly fine with needing a desktop monitor and trading brightness there since it is in a stable environment until we hit the technological level that will allow all these elements to be combined. Maybe quantum dot display technology in the future?

    One thing I have to give all these new displays is that they FINALLY have gotten the wide viewing angles thing right and I will be so happy to get this into the rest of the market.
  • seapeople - Tuesday, April 03, 2012

    Would you really prefer a bright 1366x768 TN panel with a 200:1 contrast ratio on a 15" laptop over a less bright IPS iPad screen with much better resolution, DPI, color accuracy, and viewing angles?
  • vision33r - Wednesday, March 28, 2012

    The screen is really gorgeous when you shoot raw with any DSLR and view it in iPhoto.
  • ol1bit - Wednesday, March 28, 2012

    I just bought an ASUS Transformer Prime, and your review was spot on with what I decided. I cannot live with iOS after using Android for 3 years.

    Just the simple stuff was my decision:
    1. Freedom of Android, file transfers, etc. No iTunes requirement.
    2. MicroSD
    3. Kewl keyboard
    4. Live Wallpaper.
    5. A real desktop, separate from my applications.
    6. 32GB versus 16GB
    7. Gorilla Glass (yes, true. My original Droid lived in my pocket for 2 years with no scratches; my HTC Rezound scratched in the first 2 weeks).
    8. Asus (love their MBs)
    9. NVIDIA (love their GPUs)

    What I will miss:
    1. Ipad 3 Display.
  • darkcrayon - Thursday, March 29, 2012

    1. iTunes is no longer ever needed for an iOS device. I consider the option of a first party desktop sync solution to be an advantage now that it's not a requirement.
    7. It seems likely the new iPad uses Gorilla Glass or Gorilla Glass 2...
    9. Odd that you'd love nVidia's GPUs when they've been pretty much the bottom of the performance barrel for ARM device graphics, even excluding Apple's SoCs (which have lately been using the fastest GPUs in the industry by far).
