Faster Throughput on WCDMA

Fixing unintended attenuation is only one part of what’s new, however; the other part of the story is faster cellular connectivity for users on WCDMA/UMTS carriers. Users running the 4S on CDMA (Sprint or Verizon) won’t see a performance difference, since the air interface is still the same EVDO Rev.A.

The iPhone 4 used an Intel/Infineon X-Gold 618, which supported HSDPA 7.2 and HSUPA 5.76. The MDM6610 inside the 4S supports HSDPA 14.4 and HSUPA 5.76, alongside a number of 3GPP Rel.7 features that are colloquially known as HSPA+. I talked about this extensively in another piece when there was some confusion about whether or not the 4S is HSPA+ - which it is.

iPhone Cellular Speeds
Property   iPhone 3G/3GS                iPhone 4 (GSM/UMTS)   iPhone 4 (CDMA)    iPhone 4S
Baseband   Infineon X-Gold 608          Infineon X-Gold 618   Qualcomm MDM6600   Qualcomm MDM6610
HSDPA      Cat.8 - 7.2 Mbps             Cat.8 - 7.2 Mbps      N/A                Cat.10 - 14.4 Mbps
HSUPA      None - 384 Kbps WCDMA only   Cat.6 - 5.76 Mbps     N/A                Cat.6 - 5.76 Mbps
EVDO       N/A                          N/A                   1x/EVDO Rev.A      1x/EVDO Rev.A

The previous X-Gold 618 baseband was a nice improvement over the iPhone 3G/3GS’ X-Gold 608, which lacked HSUPA, but in a world where most WCDMA carriers are running at least HSDPA 14.4, it’s good to finally have an iPhone with something faster than HSDPA 7.2. I’ve done lots of testing inside my Tucson, AZ market (which is “4G” HSPA+ on AT&T’s coverage viewer) with both the 4 and the 4S, and have built a very good feel for the 4’s performance. As a reminder, if you’re in the USA, the dark blue areas on that coverage viewer represent HSPA+ coverage with AT&T’s upgraded backhaul; in practice these are at least HSDPA 14.4.

 
Left: iPhone 4 Limited to ~6.1 Mbps down, Right: iPhone 4S (same location) hitting ~9 Mbps

With line of sight to an AT&T NodeB inside my HSPA+ market, I’m used to seeing a maximum downstream throughput on the iPhone 4 of almost exactly ~6.1 Mbps, which is about right for the 4’s HSDPA 7.2 ceiling once you account for protocol overhead. The nice straight line in that result should clue you in that downstream throughput on the 4 was being gated by the baseband. On the 4S, in the same location, I’ve been able to hit 9.9 Mbps at night when the cell isn’t loaded (sadly, I didn’t grab a screenshot of that run). It’s nice to finally not be gated by the baseband on an iDevice. A faster baseband is only half the reason the 4S’s cellular performance is so much better; the other half is receive diversity, which helps the 4S sustain these high throughput rates and also dramatically improves performance at the cell edge.
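As a rough sanity check on those ceilings, here is a minimal back-of-the-envelope sketch; the ~15% overhead figure is an assumption inferred from the ~6.1 Mbps we observe on the 4, not a published spec.

```python
# Back-of-the-envelope check: usable throughput vs. air-interface maximum.
# The overhead fraction is an assumption chosen to line up with the ~6.1 Mbps
# ceiling observed on the iPhone 4; real overhead varies with RLC/MAC framing,
# TCP/IP headers, and radio conditions.

HSDPA_CATEGORY_MAX_MBPS = {
    "Cat.8  (iPhone 4)":  7.2,
    "Cat.10 (iPhone 4S)": 14.4,
}

ASSUMED_OVERHEAD = 0.15  # ~15% combined protocol/framing overhead (assumption)

for device, air_rate in HSDPA_CATEGORY_MAX_MBPS.items():
    goodput = air_rate * (1 - ASSUMED_OVERHEAD)
    print(f"{device}: {air_rate:4.1f} Mbps over the air -> ~{goodput:.1f} Mbps usable")

# Cat.8:  7.2 Mbps -> ~6.1 Mbps, right at the iPhone 4's observed ceiling.
# Cat.10: 14.4 Mbps -> ~12.2 Mbps, so the ~9-9.9 Mbps seen on the 4S is being
# limited by the cell and backhaul rather than by the baseband.
```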

I did some drive testing with the 4 and 4S side by side and targeted areas that I know have pretty poor signal strength. The 4S is shown in yellow, the 4 in blue.

You can see how downstream throughput gets a nice shift upward: the average rises from 2.28 Mbps on the 4 to 2.72 Mbps on the 4S, and the maximum in this sample increases from 6.25 to 7.62 Mbps. It isn’t a huge shift, but subjectively I’ve noticed the 4S going a lot faster in areas that were previously difficult for the 4.

We’ve also run the usual set of standalone tests on the 4S on AT&T in my market of Tucson, AZ and in Anand’s market of Raleigh, NC, and on Verizon in Raleigh, NC. Though we don’t have a Sprint 4S yet, we hope to do a more serious 4S carrier comparison here in the US when we get one. First up is AT&T, which is of course HSPA+ in both of our testing markets.

AT&T HSPA+

Verizon EVDO

iPhone 4S Speedtest Comparison
Carrier              AT&T                          Verizon
                     Avg       Max       Min       Avg       Max       Min
Downstream (Mbps)    3.53      9.94      0.24      0.82      2.05      0.07
Upstream (Mbps)      1.17      1.86      0.009     0.38      0.96      0.003
Latency (ms)         137       784       95        177       1383      104
Total Tests          457                           150
Air Interface        HSPA+ (HSDPA 14.4/HSUPA 5.76) EVDO Rev.A
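The figures above are simple reductions of several hundred individual Speedtest.net runs per device. For reference, a minimal sketch of that kind of reduction is below; the CSV layout and column names are hypothetical placeholders, not Speedtest.net’s actual export format.

```python
# Minimal sketch: reduce raw speedtest runs into the avg/max/min figures shown
# in the table above. The CSV path and column names are hypothetical.
import csv
from statistics import mean

def summarize(path):
    down, up, latency = [], [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            down.append(float(row["down_mbps"]))
            up.append(float(row["up_mbps"]))
            latency.append(float(row["latency_ms"]))
    for name, values in (("Downstream (Mbps)", down),
                         ("Upstream (Mbps)", up),
                         ("Latency (ms)", latency)):
        print(f"{name:<18} avg {mean(values):8.2f}  max {max(values):8.2f}  min {min(values):8.3f}")
    print(f"Total tests: {len(down)}")

# Example usage (hypothetical file of raw AT&T 4S results):
# summarize("att_iphone4s_speedtests.csv")
```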

For the CDMA carriers, the 4S shouldn’t (and doesn’t) bring any huge improvement to data throughput, because the CDMA 4 already had both receive diversity and an MDM66x0 baseband. For users on GSM/UMTS, however, the 4S does make a difference, again thanks to the inclusion of those two features.

One of the things I noticed was absent on the CDMA iPhone 4 was the 3G toggle. It makes some sense not to include it in a CDMA 1x/EVDO scenario, since power draw is about the same between those two air interfaces; however, the absence of this toggle has carried over to the 4S regardless of whether the phone is activated on a CDMA2000 or UMTS/GSM network. That’s right: go under Settings -> General -> Network, and there’s no longer any 3G toggle you can disable to fall back onto EDGE (2G).

 
Left: iPhone 4S (no 3G toggle), Right: iPhone 4 (3G toggle)

It’s likely the toggle is absent to accommodate the multi-mode nature of the 4S (and thus the lowest common denominator, CDMA mode); however, its absence makes getting connected in congested areas more difficult. In some markets (I’m looking at you, AT&T in Las Vegas), EDGE is often the only way to get any connectivity, even without a major convention going on. Without the 3G toggle, manually selecting that less-used but more reliable connection is now impossible, to say nothing of the potential battery savings it would afford (and that we sadly can’t test).

There’s one last, tangential question about HSPA+ on the 4S, specifically on AT&T. I’ve left it to the end since it doesn’t impact non-US 4S users, but the question is whether AT&T actually lets the 4S run at HSPA+ rates. For a while I was concerned that AT&T would continue using the wap.cingular APN on the 4S, which appears to be shaped to around 7.2 Mbps HSDPA. I’m glad to report that AT&T hasn’t carried wap.cingular over to its 4S data plans, instead using “phone,” a newer APN that allows HSPA+ (above 7.2 Mbps) rates. You can check this yourself under PDP Context Info in field test mode on the 4S.

Comments

  • metafor - Tuesday, November 1, 2011 - link

    When you say power efficiency, don't you mean perf/W?

    I agree that perf/W varies depending on the workload, exactly as you explained in the article. However, the perf/W is what makes the difference in terms of total energy used.

    It has nothing to do with race-to-sleep.

    That is to say, even if CPU B takes longer to get to sleep, as long as it has better perf/W it will still use less total energy. In fact, I think this is what you demonstrated with your second example :)

    The total energy consumption is directly related to how power-efficient a CPU is. Whether it's a slow processor that runs for a long time or a fast processor that runs for a short amount of time, whichever one can process more instructions per second relative to joules per second wins.

    Or, when you take seconds out of the equation, whichever can process more instructions/joule wins.

    Now, I assume you got this idea from one of Intel's people. The thing their marketing team usually forgets to mention is that when they say race-to-sleep is more power efficient, they're not talking about the processor, they're talking about the *system*.

    Take the example of a high-performance server. The DRAM array and storage can easily make up 40-50% of the total system power consumption.
    Let's then say we had two hypothetical CPU's with different efficiencies. CPU A being faster but less power efficient and CPU B being slower but more power efficient.

    The total power draw of DRAM and the rest of the system remains the same. And on top of that, the DRAM and storage can be shut down once the CPU is done with its processing job but must remain active (DRAM refreshed, storage controllers powered) while the CPU is active.

    In this scenario, even if CPU A draws more power processing the job compared to CPU B, the system with CPU B has to keep the DRAM and storage systems powered for longer. Thus, under the right circumstances, the system containing CPU A actually uses less overall power because it keeps those power-hungry subsystems active for a shorter amount of time.

    However, how well this scenario translates into a smartphone system, I can't say. I suspect not as well.
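To put rough numbers on that server scenario, here is a minimal sketch; every power and workload figure in it is hypothetical, chosen only to illustrate the trade-off being described.

```python
# Hypothetical system-level energy comparison: CPU A is faster but less
# efficient per unit of work; CPU B is slower but more efficient. DRAM and
# storage draw a fixed "base" power for as long as the job is running.
# Every number here is made up purely for illustration.

JOB_UNITS = 100.0      # amount of work to process (arbitrary units)
BASE_POWER_W = 40.0    # DRAM + storage power while the job is active (hypothetical)

cpus = {
    # name: (performance in units/s, CPU power in W while active)
    "CPU A (fast, less efficient)": (10.0, 60.0),  # 6.0 J of CPU energy per unit
    "CPU B (slow, more efficient)": (5.0, 25.0),   # 5.0 J of CPU energy per unit
}

for name, (perf, cpu_power) in cpus.items():
    runtime = JOB_UNITS / perf
    cpu_energy = cpu_power * runtime
    system_energy = cpu_energy + BASE_POWER_W * runtime
    print(f"{name}: {runtime:4.0f} s, CPU {cpu_energy:5.0f} J, system {system_energy:5.0f} J")

# CPU A: 10 s, CPU 600 J, system 1000 J
# CPU B: 20 s, CPU 500 J, system 1300 J
# CPU B is the more efficient CPU in isolation, but the system built around
# CPU A uses less total energy because the power-hungry subsystems can shut
# down sooner: exactly the race-to-sleep argument at the system level.
```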
  • Anand Lal Shimpi - Tuesday, November 1, 2011 - link

    I believe we're talking about the same thing here :)

    The basic premise is that you're able to guarantee similar battery life, even if you double core count and move to a power hungry OoO architecture without a die shrink. If your performance gains allow your CPU/SoC to remain in an ultra low power idle state for longer during those workloads, the theoretically more power hungry architecture can come out equal or ahead in some cases.

    You are also right about platform power consumption as a whole coming into play. Although with the shift from LPDDR1 to LPDDR2, an increase in effective bandwidth and a number of other changes it's difficult to deal with them independently.

    Take care,
    Anand
  • metafor - Tuesday, November 1, 2011 - link

    "If your performance gains allow your CPU/SoC to remain in an ultra low power idle state for longer during those workloads, the theoretically more power hungry architecture can come out equal or ahead in some cases."

    Not exactly :) The OoOE architecture has to perform more tasks per joule. That is, it has to have better perf/W. If it has worse perf/W, it doesn't matter how much longer it remains idle compared to the slower processor; it will still use more net energy.

    It's total platform power that may see savings, despite a less power-efficient and more power-hungry CPU. That's why I suspect that this "race to sleep" situation won't translate to the smartphone system.

    The entire crux relies on the fact that although the CPU itself uses more power per task, it saves power by allowing the rest of the system to go to sleep faster.

    But smartphone subsystems aren't that power hungry, and CPU power consumption generally increases with the *square* of performance. (Generally, this wasn't the case for A8 -> A9, but you can bet it's the case for A9 -> A15.)

    If the increase in CPU power per task is greater than the savings of having the rest of the system active for shorter amounts of time, it will still be a net loss in power efficiency.

    Put it another way. A9 may be a general power gain over A8, but don't expect A15 to be so compared to A9, no matter how fast it finishes a task :)
  • doobydoo - Tuesday, November 1, 2011 - link

    You are both correct, and you are also both wrong.

    Metafor is correct because a faster chip, given a set number of tasks to do over a fixed number of seconds, will consume more energy than an equally power-efficient but slower chip, no matter how quickly it can finish those tasks. In other words, being able to go to sleep sooner never means a chip becomes more power efficient than it was before. It actually becomes less efficient.

    This is easily provable by splitting the energy into two parts. If 2 chips are equally power efficient (as in they can both perform the same number of 'tasks' per joule) and one is twice as fast, the faster one will draw twice the power while active but complete in half the time, so that part of the energy will ALWAYS be equal for both chips. However, the chip which finished sooner will then sit idle for LONGER because it finished quicker, so the idle expense of energy will always be higher for the faster chip. This assumes, as I said, that the idle power draw of both chips is equal.

    Anand is correct, because if you DO have a more power-efficient chip with a higher maximum power draw, race-to-sleep is OFTEN (assuming reasonable idle times) the reason it can actually use less energy. Consider 2 chips: one which draws 1.3 W (max) and can carry out '2' tasks per second, and a second which draws 1 W (max) and can carry out '1' task per second (so is less power efficient). Now consider a world without race-to-sleep. To carry out '10' tasks over a 10 second period, chip one would take 5 seconds, but would remain at full power for the full 10 seconds, thereby using 13 J. Chip two would take 10 seconds, and would use a total of 10 J over that period. Thus, without sleep, the more power-efficient chip actually ends up using more energy.

    Now if we factor in race-to-sleep, the first chip can draw 1.3 W for the first 5 seconds, then drop to 0.05 W for the last 5, consuming 6.75 J. The second chip would still consume the same 10 J.

    Conclusion:

    If the chip is not more power efficient, it can never consume less energy, with or without race-to-sleep. If the chip IS more power efficient, but doesn't have the sleep facility, it may not use less energy in all scenarios.

    In other words, for a higher-powered chip to reduce energy in ALL situations, it needs to a) be fundamentally more power efficient, and b) be able to sleep (race-to-sleep).
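The arithmetic in that worked example can be checked with a few lines of Python, using the same hypothetical chip figures.

```python
# Re-run the worked example above: 10 tasks to finish within a 10 second window.
# Chip 1: 1.3 W active, 2 tasks/s (more efficient per task).
# Chip 2: 1.0 W active, 1 task/s  (less efficient per task).
# Idle power once a chip has raced to sleep is assumed to be 0.05 W.

TASKS = 10
WINDOW_S = 10.0
IDLE_W = 0.05

def energy_joules(active_w, tasks_per_s, race_to_sleep):
    busy = TASKS / tasks_per_s                 # seconds spent working
    idle = WINDOW_S - busy                     # seconds left over in the window
    idle_w = IDLE_W if race_to_sleep else active_w
    return active_w * busy + idle_w * idle

for sleeping in (False, True):
    chip1 = energy_joules(1.3, 2.0, sleeping)
    chip2 = energy_joules(1.0, 1.0, sleeping)
    label = "with race-to-sleep" if sleeping else "no sleep (full power all window)"
    print(f"{label}: chip 1 = {chip1:.2f} J, chip 2 = {chip2:.2f} J")

# no sleep:            chip 1 = 13.00 J, chip 2 = 10.00 J
# with race-to-sleep:  chip 1 =  6.75 J, chip 2 = 10.00 J
# Chip 1 only wins when it can actually race to sleep, and only because it is
# also the more efficient chip per task.
```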
  • djboxbaba - Monday, October 31, 2011 - link

    Well done on the review Brian and Anand, excellent job as always. I was resisting the urge to tweet you about the ETA of the review, and of course I ended up doing it the same day you released the review :).
  • Mitch89 - Monday, October 31, 2011 - link

    "This same confidence continues with the 4S, which is in practice completely usable without a case, unlike the GSM/UMTS iPhone 4. "

    Every time I read something like this, I can't help but compare it to my experience with iPhone 4 reception, which was never a problem. I'm on a very good network here in Australia (Telstra), and never did I have any issues with reception when using the phone naked. Calls in lifts? No problem. Way outside the suburbs and cities? Signal all the way.

    I never found the iPhone 4 to be any worse than other phones when I used it on a crappy network either.

    Worth noting, battery life is noticeably better on a strong network too...
  • wonderfield - Tuesday, November 1, 2011 - link

    Same here. It's certainly possible to "death grip" the GSM iPhone 4 to the point where it's rendered unusable, but this certainly isn't the typical use case. For Brian to make the (sideways) claim that the 4 is unusable without a case is fairly disingenuous. Certainly handedness has an impact here, but considering 70-90% of the world is right-handed, it's safe to assume that 70-90% of the world's population will have few to no issues with the iPhone 4, given it's being used in an area with ample wireless coverage.
  • doobydoo - Tuesday, November 1, 2011 - link

    I agree with both of these. I am in a major capital city which may make a difference, but no amount or technique of gripping my iPhone 4 ever caused dropped calls or stopped it working.

    Very much an overstated issue in the press, I think.
  • ados_cz - Tuesday, November 1, 2011 - link

    It was not overstated at all, and the argument that most people are right-handed doesn't hold up. I live in a small town in Scotland and my usual signal strength is around 2-3 bars. If I'm browsing the net on 3G without a case and holding the iPhone 4 naturally with my left hand (using the right hand for touch commands), I lose signal completely.
  • doobydoo - Tuesday, November 1, 2011 - link

    Well the majority of people don't lose signal.

    I have hundreds of friends who have iPhone 4's who've never had any issue with signal loss at all.

    The point is you DON'T have to be 'right handed' for them to work; I have left-handed friends who also have no issues.

    You're the exception, rather than the rule - which is why the issue was overstated.

    For what it's worth, I don't believe you anyway.
