Improved Baseband - No Deathgrip

We’ve mentioned already that the 4S is more of an evolution of the CDMA iPhone 4 than the GSM/UMTS iPhone 4, and nowhere is that more obvious than in its cellular architecture. When we reviewed the GSM/UMTS iPhone 4, we investigated and explained why its design made it especially prone to unintended attenuation when held a certain way. Apple mitigated the situation somewhat by giving away free cases, but only with the CDMA iPhone 4 was the fundamental design issue addressed, by adding receive diversity with a second cellular antenna at the top.

The 4S builds on this architecture by incorporating the same diverse receive path with two U-shaped antennas at top and bottom, and adding transmit diversity to the equation. I’m being a bit fast and loose with the term “diversity” here, since the 4S cannot transmit simultaneously on both antennas, but rather switches between the two based on which has the least receive fading. Receive diversity is a bit more interesting to explain, since the 4S appears to be able to dynamically decide whether to select one of the two receive paths (in the case that one is much more faded than the other) or combine the two (using something like maximal-ratio combining) to get a better-quality equivalent signal.
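To make the two receive strategies concrete, here's a minimal numeric sketch of selection combining versus maximal-ratio combining. This illustrates the general techniques only, not Apple's or Qualcomm's actual implementation; the branch gains and noise power are made-up values.

```python
# Minimal sketch: selection combining vs. maximal-ratio combining (MRC)
# over two receive antennas. Illustrative values only.
import numpy as np

rng = np.random.default_rng(0)

symbols = rng.choice([-1.0, 1.0], size=1000)   # BPSK symbols
h = np.array([0.9, 0.3])                       # branch gains: ANT1 strong, ANT2 faded
noise_var = 0.1                                # per-branch noise power

# Received signal on each antenna branch
rx = h[:, None] * symbols + np.sqrt(noise_var) * rng.standard_normal((2, symbols.size))

branch_snr = h**2 / noise_var                  # per-branch SNR (linear)

# Selection combining: use only the less-faded branch
best = np.argmax(branch_snr)
sel_out = rx[best] / h[best]

# MRC: weight each branch by its gain and sum, so the combined SNR
# is the *sum* of the branch SNRs
mrc_out = (h[:, None] * rx).sum(axis=0) / (h**2).sum()

print(f"branch SNRs:   {10 * np.log10(branch_snr)} dB")
print(f"selection SNR: {10 * np.log10(branch_snr.max()):.2f} dB")
print(f"MRC SNR:       {10 * np.log10(branch_snr.sum()):.2f} dB")
```

With one branch badly faded, selection simply keeps the good branch's roughly 9.1 dB, while MRC squeezes another half dB or so out of the faded antenna; when both branches are comparable, MRC's advantage over selection approaches 3 dB.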

When I talked about the CDMA 4’s receive diversity scheme, I went into detail to prove that the feature was present. With the 4S, these diversity features are much easier to talk about, because Apple has been directly forthcoming about them this time. In addition, the transmit switching part of the equation is discussed at length in Apple’s FCC disclosure (since the FCC is interested in scrutinizing transmitters), which even includes a nice diagram.

The device is capable of switching between the Primary/ANT1 and Secondary/ANT2 Antennas. The antenna switching is implemented with a physical, "break-before-make" switch such that only one antenna can be used for cellular transmission at a time.

To restate the evolution Apple has taken with this design: the GSM/UMTS 4 had one transmit and receive chain, the CDMA 4 added a second receive chain for diversity, and the 4S now has two receive chains and the ability to switch between two transmit chains. The result is that the phone is no longer prone to fading on either the forward or reverse link when held near the feed points for one chain.

To be thorough, however, you can immediately see that the 4S still incorporates the same overall design, complete with a U.FL connector at the bottom and a feed cable that runs up the side of the PCB to the top. From there it obviously terminates somewhere that makes contact with the top antenna.


Encircled in red: Top antenna feed cable (Original photo courtesy iFixit)

When we saw that the CDMA iPhone used Qualcomm’s MDM6600, we made an educated guess that the next device would likely be based on the same baseband. It turned out that this was pretty close to the case, as the iPhone 4S uses an unannounced Qualcomm MDM6610.


Encircled in blue: Qualcomm MDM6610 and PM8028, Red: Bottom antenna U.FL connector. Note to the right the two test pads for the two chains. (Original photo courtesy iFixit)

There are some immediate differences that stick out about MDM6610 versus MDM6600. First, 6610 doesn’t appear to include an internal transceiver, unlike MDM6600, which included a transceiver in its package. MDM6610’s package is correspondingly smaller, at 10x10 mm versus 6600’s 9.8x12.2 mm, but again you have to factor in that the transceiver is no longer inside that package. Based on those facts and the current state of 28nm maturity, it’s almost certain that MDM6610 is still a 45nm part. The 4S uses Qualcomm’s RTR8605 IntelliCeiver transceiver for cellular RF and also for GLONASS/GPS. We’ve seen RTR8600 before, and RTR8605 appears to simply be an improvement on that part with the same functionality. In fact, inside MDM6610’s AMSS it’s just referred to as RTR8600.

Qualcomm isn’t ready to talk at all about MDM6610, but I’ve done a little digging and know more about this part. First off, it’s referred to by the codename “Trek” inside the Qualcomm AMSS (Advanced Mobile Subscriber Software), whereas MDM6600 in the CDMA iPhone was referred to as “Phoenix.” When we first saw that the 4S was using a baseband that hasn’t been announced, we assumed it was an Apple-specific one-off. However, after some digging I uncovered a reference to a GSM/UMTS-only part as well: “MDM6210.” For those who aren’t familiar, the 2 connotes GSM/UMTS compatibility, while the 6 adds CDMA 1x/EVDO. Apple appears to have secured MDM6610 exclusivity for the time being, but I expect we’ll see a more formal announcement of the MDM6x10 family down the road. This is pretty much par for the course for Apple when it comes to component sourcing.
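As an aside, that digit convention is simple enough to capture in code. This is a toy sketch inferred purely from the part names mentioned here (MDM6210, MDM6610, and earlier MDM6600), not from any official Qualcomm naming documentation.

```python
# Toy decoder for the Qualcomm MDM digit convention described above.
# Mapping inferred from part names in this article; not official.
AIR_INTERFACES = {
    "2": "GSM/UMTS",                   # e.g. MDM6210
    "6": "GSM/UMTS + CDMA 1x/EVDO",    # e.g. MDM6610, MDM6600
}

def decode_mdm(part: str) -> str:
    # The second digit after "MDM" is the one that differs
    # between MDM6210 and MDM6610
    digit = part.removeprefix("MDM")[1]
    return AIR_INTERFACES.get(digit, "unknown")

print(decode_mdm("MDM6610"))   # GSM/UMTS + CDMA 1x/EVDO
print(decode_mdm("MDM6210"))   # GSM/UMTS
```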

iPhone 4S - Network Support
GSM/EDGE Support: 850 / 900 / 1800 / 1900 MHz
UMTS/HSDPA/HSUPA Support: 850 / 900 / 1900 / 2100 MHz
HSDPA/HSUPA Speeds: 14.4 Mbps / 5.76 Mbps
CDMA2000 1x/EVDO Support: EVDO Rev. A (800 / 1900 MHz)
Baseband Hardware: Qualcomm MDM6610

MDM6610 also appears to inherit all the air interfaces MDM6600 had, including WCDMA/HSPA+ with 14.4 Mbps HSDPA and 5.76 Mbps HSUPA, GSM, CDMA 1x/EVDO (up to Rev. B, though Apple notes only Rev. A is used at present), and finally GPS/GLONASS. Superficially, there don’t appear to be any major differences beyond using an external transceiver to accommodate receive diversity and transmit switching. In addition, the 4S supports all the same bands as the GSM/UMTS iPhone 4.

I was initially worried that Apple would go the same route it did with both iPhone 4 launches and omit FieldTest at launch. Thankfully, the 4S includes FieldTest right from launch, and it works on both CDMA and GSM/UMTS provisioned 4Ses.

 
iPhone 4S Field Test under CDMA Mode

There’s a corresponding difference in the interface between the two modes. In CDMA mode, there are tabs for 1x and EVDO, Rx signal strength fields for both receive paths, and the current transmit power. What’s interesting here is that when using the 4S in CDMA mode, you can watch the receive strength change for each antenna as the phone is held.

 
iPhone 4S Field Test under UMTS/GSM Mode

On the GSM/UMTS provisioned iPhone 4S, you don’t get individual receive strength for each chain, just the effective RSCP from either the selected chain or the combination of the two. If you grip the phone while watching this, you can sometimes watch RSCP race down to –130 dBm, then switch entirely to the other antenna and race back up to an acceptable number. As a reminder, to get into FieldTest, dial *3001#12345#* - you can then keep numerics instead of bars in the top left by force quitting FieldTest after launching it (hold down power/lock until power off appears, then hold the home button).
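What you're watching there is consistent with a simple control loop: monitor receive strength on both antennas, and move the transmitter when the other antenna is clearly less faded. Below is a toy Python model of that idea, including the break-before-make ordering from Apple's FCC disclosure; the 3 dB hysteresis margin and the RSCP values are assumptions for illustration, not anything Apple has published.

```python
# Toy model of transmit antenna switching (illustrative assumptions,
# not Apple's actual algorithm). The radio watches receive fading on
# both antennas and moves the transmitter to the less-faded one, with
# hysteresis so it doesn't flap, and break-before-make ordering so
# only one antenna ever transmits at a time.
HYSTERESIS_DB = 3.0  # assumed switching margin

class TxAntennaSwitch:
    def __init__(self) -> None:
        self.active = 0  # start on ANT1 (index 0)

    def update(self, rx_dbm) -> int:
        other = 1 - self.active
        # Switch only if the other antenna is clearly better
        if rx_dbm[other] > rx_dbm[self.active] + HYSTERESIS_DB:
            self._disconnect(self.active)  # break...
            self._connect(other)           # ...before make
            self.active = other
        return self.active

    def _disconnect(self, ant: int) -> None:
        print(f"  ANT{ant + 1} disconnected")

    def _connect(self, ant: int) -> None:
        print(f"  ANT{ant + 1} connected")

# Simulated per-antenna receive strengths (dBm): a hand over the
# bottom antenna fades ANT1, so transmit moves to ANT2 and back
switch = TxAntennaSwitch()
for ant1, ant2 in [(-85, -95), (-110, -95), (-130, -92), (-88, -95)]:
    active = switch.update([ant1, ant2])
    print(f"rx = ({ant1}, {ant2}) dBm -> transmitting on ANT{active + 1}")
```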

So how does the 4S do in our actual attenuation testing? Very well, actually. I watched the RSCP of one NodeB, held the 4S all the same ways we’ve tested since the 4, recorded a bunch of data, and averaged it together. The underlying signal strengths are reported in dBm (0 dBm = 1 mW), and the attenuation figures below are the resulting drop in dB for each grip; less attenuation is better.
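A quick sketch of the arithmetic, using hypothetical RSCP readings chosen so that the delta matches the 4S's cupping-tightly result in the table:

```python
# dBm -> mW conversion and attenuation arithmetic. The two RSCP
# readings are hypothetical; only the 10.8 dB delta matches the
# 4S "cupping tightly" row in the table below.
def dbm_to_mw(dbm: float) -> float:
    return 10 ** (dbm / 10)

free_air = -85.0   # hypothetical RSCP, phone untouched (dBm)
cupped = -95.8     # hypothetical RSCP, cupped tightly (dBm)

attenuation_db = free_air - cupped
power_ratio = dbm_to_mw(free_air) / dbm_to_mw(cupped)

print(f"attenuation: {attenuation_db:.1f} dB")        # 10.8 dB
print(f"received power drop: {power_ratio:.1f}x")     # ~12x less power
```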

Signal Attenuation Comparison in dB - Lower is Better

Device                   Cupping Tightly   Holding Naturally   Holding in Case   On an Open Palm
iPhone 4S                     10.8               8.8                 -                 4.3
CDMA iPhone 4                 16.5              15.5                9.0                7.9
GSM/UMTS iPhone 4             24.6              19.8                7.2                9.2
Samsung Galaxy S 2            18.4               5.9                 -                12.2
Droid 3                       16.0              11.3                 -                 5.0
HTC Sensation                 15.0              10.0                8.0                0.0
Samsung Droid Charge          10.0              10.0                5.0                0.0
HTC Thunderbolt - LTE          5.3               2.5                 -                 4.4
HTC Thunderbolt - EVDO         6.5               0.8                 -                 7.2
LG Optimus 2X                 13.7               9.3                 -                 5.9
Nexus S                       13.3               6.1                 -                 4.3
Droid 2                       11.5               5.1                 -                 4.5
BlackBerry Torch              15.9               7.1                 -                 3.7
Dell Streak                   14.0               8.7                 -                 4.0
Droid X                       15.0               5.1                 -                 4.5
iPhone 3GS                    14.3               1.9                3.2                0.2
HTC Nexus One                 17.7              10.7                7.7                6.7

When we reviewed the CDMA iPhone 4, I mentioned that it was the first iPhone 4 I felt comfortable using without a case. That same confidence continues with the 4S, which in practice is completely usable without a case, unlike the GSM/UMTS iPhone 4. Anand and I have been using our 4Ses without cases since launch, and the difference is dramatic.

Comments

  • metafor - Tuesday, November 1, 2011

    When you say power efficiency, don't you mean perf/W?

    I agree that perf/W varies depending on the workload, exactly as you explained in the article. However, the perf/W is what makes the difference in terms of total energy used.

    It has nothing to do with race-to-sleep.

    That is to say, if CPU B takes longer to get to sleep but has better perf/W, it would use less energy. In fact, I think this is what you demonstrated with your second example :)

    The total energy consumption is directly related to how power-efficient a CPU is. Whether it's a slow processor that runs for a long time or a fast processor that runs for a short amount of time, whichever one can process more instructions per second versus joules per second wins.

    Or, when you take seconds out of the equation, whichever can process more instructions/joule wins.

    Now, I assume you got this idea from one of Intel's people. The thing their marketing team usually forgets to mention is that when they say race-to-sleep is more power efficient, they're not talking about the processor, they're talking about the *system*.

    Take the example of a high-performance server. The DRAM array and storage can easily make up 40-50% of the total system power consumption.
    Let's then say we had two hypothetical CPUs with different efficiencies: CPU A being faster but less power efficient, and CPU B being slower but more power efficient.

    The total power draw of DRAM and the rest of the system remains the same. And on top of that, the DRAM and storage can be shut down once the CPU is done with its processing job but must remain active (DRAM refreshed, storage controllers powered) while the CPU is active.

    In this scenario, even if CPU A draws more power processing the job compared to CPU B, the system with CPU B has to keep the DRAM and storage systems powered for longer. Thus, under the right circumstances, the system containing CPU A actually uses less overall power because it keeps those power-hungry subsystems active for a shorter amount of time.

    However, how well this scenario translates into a smartphone system, I can't say. I suspect not as well.
  • Anand Lal Shimpi - Tuesday, November 1, 2011

    I believe we're talking about the same thing here :)

    The basic premise is that you're able to guarantee similar battery life, even if you double core count and move to a power hungry OoO architecture without a die shrink. If your performance gains allow your CPU/SoC to remain in an ultra low power idle state for longer during those workloads, the theoretically more power hungry architecture can come out equal or ahead in some cases.

    You are also right about platform power consumption as a whole coming into play. Although with the shift from LPDDR1 to LPDDR2, an increase in effective bandwidth and a number of other changes it's difficult to deal with them independently.

    Take care,
    Anand
  • metafor - Tuesday, November 1, 2011

    "If your performance gains allow your CPU/SoC to remain in an ultra low power idle state for longer during those workloads, the theoretically more power hungry architecture can come out equal or ahead in some cases."

    Not exactly :) The OoOE architecture has to perform more tasks per joule. That is, it has to have better perf/W. If it had worse perf/W, it doesn't matter how much longer it remains idle compared to the slower processor. It will still use more net energy.

    It's total platform power that may see savings, despite a less power-efficient and more power-hungry CPU. That's why I suspect that this "race to sleep" situation won't translate to the smartphone system.

    The entire crux relies on the fact that although the CPU itself uses more power per task, it saves power by allowing the rest of the system to go to sleep faster.

    But smartphone subsystems aren't that power hungry, and CPU power consumption generally increases with the *square* of performance. (This wasn't the case for A8 -> A9, but you can bet it will be for A9 -> A15.)

    If the increase in CPU power per task is greater than the savings of having the rest of the system active for shorter amounts of time, it will still be a net loss in power efficiency.

    Put it another way. A9 may be a general power gain over A8, but don't expect A15 to be so compared to A9, no matter how fast it finishes a task :)
  • doobydoo - Tuesday, November 1, 2011

    You are both correct, and you are also both wrong.

    Metafor is correct because any chip, given a set number of tasks to do over a fixed number of seconds, regardless of how much faster it can perform, will consume more energy than an equally power efficient but slower chip. In other words, being able to go to sleep quicker never means a chip becomes more power efficient than it was before. It actually becomes less.

    This is easily logically provable by splitting the energy into two sections. If 2 chips are both equally power efficient (as in they can both perform the same number of 'tasks' per W), and one is twice as fast, it will draw twice the power while active but complete in half the time, so that element will ALWAYS be equal for both chips. However, the chip which finished sooner will then be idle for LONGER because it finished quicker, so the idle expense of energy will always be higher for the faster chip. This assumes, as I said, that the idle power draw of both chips is equal.

    Anand is correct, because if you DO have a more power efficient chip with a higher maximum wattage, race-to-sleep is OFTEN (assuming reasonable idle times) the reason it can actually use less energy. Consider 2 chips: one draws 1.3 W (max) and can carry out '2' tasks per second; a second draws 1 W (max) and can carry out '1' task per second (so it is less power efficient). Now consider a world without race-to-sleep. To carry out '10' tasks over a 10 second period, chip one would take 5 seconds but would remain at full power for the full 10 seconds, thereby using 13 J. Chip two would take 10 seconds and would use a total of 10 J over that period. Thus, the more power efficient chip actually ended up using more energy.

    Now if we factor in race-to-sleep, the first chip can draw 1.3 W for the first 5 seconds, then drop to 0.05 W for the last 5, consuming 6.75 J. The second chip would still consume the same 10 J.

    Conclusion:

    If the chip is not more power efficient, it can never consume less energy, with or without race-to-sleep. If the chip IS more power efficient but doesn't have the sleep facility, it may not use less energy in all scenarios.

    In other words, for a higher powered chip to use less energy in ALL situations, it needs to a) be fundamentally more power efficient, and b) be able to sleep (race-to-sleep).
  • djboxbaba - Monday, October 31, 2011

    Well done on the review Brian and Anand, excellent job as always. I was resisting the urge to tweet you about the ETA of the review, and of course I end up doing it the same day you release the review :).
  • Mitch89 - Monday, October 31, 2011

    "This same confidence continues with the 4S, which is in practice completely usable without a case, unlike the GSM/UMTS iPhone 4. "

    Every time I read something like this, I can't help but compare it to my experience with iPhone 4 reception, which was never a problem. I'm on a very good network here in Australia (Telstra), and I never had any issues with reception when using the phone naked. Calls in lifts? No problem. Way outside the suburbs and cities? Signal all the way.

    I never found the iPhone 4 to be any worse than other phones when I used it on a crappy network either.

    Worth noting, battery life is noticeably better on a strong network too...
  • wonderfield - Tuesday, November 1, 2011

    Same here. It's certainly possible to "death grip" the GSM iPhone 4 to the point where it's rendered unusable, but that's hardly the typical use case. For Brian to make the (sideways) claim that the 4 is unusable without a case is fairly disingenuous. Certainly handedness has an impact here, but considering 70-90% of the world is right-handed, it's safe to assume that 70-90% of the world's population will have few to no issues with the iPhone 4, given it's being used in an area with ample wireless coverage.
  • doobydoo - Tuesday, November 1, 2011

    I agree with both of these. I am in a major capital city which may make a difference, but no amount or technique of gripping my iPhone 4 ever caused dropped calls or stopped it working.

    Very much an overstated issue in the press, I think.
  • ados_cz - Tuesday, November 1, 2011

    It was not overstated at all, and the argument that most people are right-handed does not hold ground. I live in a small town in Scotland and my usual signal strength is around 2-3 bars. If I'm browsing the net on 3G without a case and holding the iPhone 4 naturally with my left hand (using the right hand for touch commands), I lose signal completely.
  • doobydoo - Tuesday, November 1, 2011

    Well the majority of people don't lose signal.

    I have hundreds of friends with iPhone 4s who've never had any issue with signal loss at all.

    The point is you DON'T have to be 'right handed' for them to work, I have left handed friends who also have no issues.

    You're the exception, rather than the rule - which is why the issue was overstated.

    For what it's worth, I don't believe you anyway.
