Apple A5X Die Size Measured: 162.94mm^2, Samsung 45nm LP Confirmed
by Anand Lal Shimpi on March 16, 2012 1:59 PM EST

Contrary to what we thought yesterday based on a visual estimation of the A5X die, Chipworks has (presumably) measured the actual die itself: 162.94mm^2. While the A5 was big, this is absolutely huge for a mobile SoC. The table below puts it in perspective.
CPU Specification Comparison

| CPU | Manufacturing Process | Cores | Transistor Count | Die Size |
|-----|----------------------|-------|------------------|----------|
| Apple A5X | 45nm? | 2 | ? | 163mm^2 |
| Apple A5 | 45nm | 2 | ? | 122mm^2 |
| Intel Sandy Bridge 4C | 32nm | 4 | 995M | 216mm^2 |
| Intel Sandy Bridge 2C (GT1) | 32nm | 2 | 504M | 131mm^2 |
| Intel Sandy Bridge 2C (GT2) | 32nm | 2 | 624M | 149mm^2 |
| NVIDIA Tegra 3 | 40nm | 4+1 | ? | ~80mm^2 |
| NVIDIA Tegra 2 | 40nm | 2 | ? | 49mm^2 |
The PowerVR SGX 543MP2 in Apple's A5 takes up just under 30% of the SoC's 122mm^2 die, or around 36.6mm^2 just for the GPU. Double the number of GPU cores, as Apple did with the A5X, and you're looking at a final die size of around 160mm^2, very close to what Chipworks came up with in its measurement.
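That back-of-the-envelope estimate is easy to check; a quick sketch using the figures above (note the 30% GPU share is itself an estimate from die-shot analysis):

```python
# Estimate the A5X die size by doubling the A5's GPU area (MP2 -> MP4).
a5_die_mm2 = 122.0    # measured A5 die size
gpu_fraction = 0.30   # estimated SGX 543MP2 share of the A5 die

gpu_mm2 = a5_die_mm2 * gpu_fraction   # area of the two existing GPU cores
a5x_estimate = a5_die_mm2 + gpu_mm2   # add the area of two more cores

print(f"GPU area: {gpu_mm2:.1f} mm^2")            # 36.6 mm^2
print(f"A5X estimate: {a5x_estimate:.1f} mm^2")   # 158.6 mm^2, vs. 162.94 measured
```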
Update: Chipworks confirmed the A5X is still built on Samsung's 45nm LP process. You can see a cross-section of the silicon above. According to Chipworks' analysis, the A5X features 9 metal layers.
Note that this is around 2x the size of NVIDIA's Tegra 3. It's no surprise Apple's GPU is faster: Apple is spending a lot more money than NVIDIA to deliver that performance. From what I hear, NVIDIA's Wayne SoC will finally show what the GPU company is made of. The only issue is that by the time Wayne shows up, a Rogue-based A6 is fairly likely. The mobile GPU wars are going to get very exciting in 2013.
Image Courtesy iFixit
Thanks to @anexanhume for the tip!
45 Comments
tipoo - Friday, March 16, 2012 - link
There are already reviews out, and they're all saying pretty much the same battery life as the iPad 2.

UpSpin - Saturday, March 17, 2012 - link
You might be right that it was plan B, and that they had to make some unwanted decisions. But let's talk about power consumption:
The new SoC is similar to the old one, except for a second GPU; power consumption of the chip might be 30% higher. RAM consumes some power too, but not that much (or does RAM get noticeably hot?).
But the display is the deal breaker. Just take a look at the Engadget post about the iPad screen under the microscope:
http://www.engadget.com/photos/the-new-ipads-lcd-u...
It's pretty obvious that horizontally no extra black gap was introduced by the switch to the higher density, but vertically about twice as much black area was added! Additionally, each LC cell consumes power even when turned off, and now four times as many cells have to be controlled, so the panel without backlight will consume at least four times more power (a panel doesn't consume that much compared to the LED backlight, but it's still an increase). And if you increase the pixel density, you get worse transmittance, so you have to increase the backlight brightness. With a single row of LEDs they couldn't operate the LEDs in their most efficient region, so they had to add a second row to increase brightness.
Another way to think about it: the new battery is 20Whr larger, but battery life is the same. So if you think the difference is down to the RAM and SoC, the two together would have to consume an additional 2 watts. That alone is ridiculous.
It's wrong to say it's the SoC alone, totally wrong to say it's the added RAM, and also wrong to say it's the display alone; but it's mainly the display.
Maybe twice the backlight brightness, a faster GPU, more RAM: all necessary because of the higher resolution.
And placing the RAM beside or on top of the chip doesn't really change power consumption; it's just a space saving, and thus a cost saving.
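UpSpin's 2 watt figure follows directly from dividing the extra capacity by the unchanged runtime; a quick sketch using the comment's 20Whr delta and Apple's quoted 10-hour rating:

```python
# Extra average power draw implied by a bigger battery at unchanged runtime.
extra_capacity_wh = 20.0   # the comment's figure for the battery increase
runtime_h = 10.0           # Apple's quoted battery life, unchanged from iPad 2

extra_power_w = extra_capacity_wh / runtime_h
print(f"Extra draw: {extra_power_w:.1f} W")   # 2.0 W, too much for SoC + RAM alone
```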
tipoo - Friday, March 16, 2012 - link
With Rogue, these mobile SoC GPUs are getting into, and maybe beyond, the 200 GFLOPS range of the PS3/360 GPUs (they said 20x the per-core performance of the 543). The current MP4 is about 30, I think. Do you think the limitation will be elsewhere for actual real-world graphics performance, though? Last I checked these chips still didn't have the memory bandwidth of graphics cards even from 2005, and then there's processor performance and how large apps/games can be, not to mention controls. With so much potential in Rogue and future SoC chips, I hope the other problems are looked at too.

thefrick - Friday, March 16, 2012 - link
A 45nm A5X is a deal-killer for me. The "iPad 3" is essentially an underpowered version of the iPad 2, considering the display's high resolution and the lack of CPU/GPU clock increases. The next iPad will benefit from a full node shrink to (presumably) 28nm on BOTH the CPU and the 4G baseband, likely in addition to new CPU (Cortex A15) and GPU architectures. The iPad 3 is shaping up to be a repeat of the iPhone 3G (read: it only survives one iOS update before becoming slow enough to impair its usefulness). This is in addition to the battery problems the iPad 3 is likely to experience: that 45nm A5X is BIG for a mobile SoC, and will be generating a lot of heat. Hot iPad innards = significantly diminished Li-Ion battery lifetime...
labrats5 - Friday, March 16, 2012 - link
I'm not sure about your iPhone 3G comparison. Apple's iOS updates seem to be more RAM dependent than anything else. A good example would be that iPhoto runs on the iPhone 4 (512MB) but not the original iPad (256MB), even though the latter's SoC is faster. The new iPad's RAM was doubled, while the iPhone 3G's wasn't.

dagamer34 - Friday, March 16, 2012 - link
Except Apple's already given their quoted battery life times and there's no change... this is an example of taking "speeds and feeds" so far, you are about to fall off a cliff.gorash - Saturday, March 17, 2012 - link
It's basically an iPad 2 with a better screen. iPad 2S.
I am very curious to see practical benchmarks. It is possible that the GPU upgrade increased performance for things other than rendering video. Remember that Core Image, Core Video, and other components of iOS/OS X are GPU accelerated. Applications actually feel a little bit snappier than they do on the iPad 2. It is a minor difference, but it is there. Again, looking forward to AnandTech's review.
jjj - Friday, March 16, 2012 - link
So it's as big as Ivy Bridge; that's a more interesting comparison.
Mobile GPU war: Apple can't really be part of such a war, so for one to exist we would need Android phone makers with their own SoCs to go for huge die sizes, forcing NVIDIA, Qualcomm, and everybody else to do the same. But that would push phone prices up, so maybe it would be better to have no such wars before 20/22nm. There is also the matter of heat: a huge GPU could force lower CPU clocks (like it might just be doing right now in Apple's case).
For traditional PCs, consoles, and TVs, obviously, a large GPU could work even before 20/22nm, since the dollar and heat budgets are less of a problem, but there isn't much point in going that way unless you've got the sales volume and the software.
If anything, I would much rather see 2-3x faster storage in phones and tablets.
A5 - Friday, March 16, 2012 - link
I'd much rather have a faster GPU than faster storage. The internal storage on phones is more than fast enough for now. Faster MicroSD cards would be nice, though.