MSM8960 Cellular Connectivity

Until now, getting 4G LTE connectivity in a smartphone has required two basebands - one for delivering 4G LTE connectivity, and a more traditional smartphone-geared baseband for voice and 2G/3G data. Take Verizon’s 4G LTE smartphone lineup, for example, where many devices combine an MSM8655 for camping a 1x voice session alongside an MDM9600 for EVDO and LTE, or some similar combination. Further, all of those LTE basebands are built on a 45nm process and are really geared toward data-specific applications.

For a while now we’ve also been talking about 28nm LTE basebands, and specifically the multimode connectivity in MSM8960. This is the first of Qualcomm’s S4 SoCs, and it includes 4G LTE connectivity alongside the usual assortment of WCDMA/GSM/CDMA2000 standards. MSM8960’s cellular baseband is based around Qualcomm’s second generation (3GPP Rel. 9) LTE modem - the same modem inside MDM9x15, which we’ve talked about in the past.

The full list of air interfaces MSM8960 supports is impressive - LTE FDD/TDD, UMTS, CDMA, TD-SCDMA (for the Chinese market), and GERAN (GSM/EDGE). I’ve made a small table below which gives the complete breakdown.

Snapdragon S4 - MSM8960 Cellular Support
LTE FDD: 100 Mbps DL / 50 Mbps UL (Cat. 3, 3GPP Rel. 9)
LTE TDD: 68 Mbps DL / 17 Mbps UL (Cat. 3, 3GPP Rel. 9)
UMTS: DC-HSPA+ 42 Mbps DL (Cat. 24) / 11 Mbps UL (Cat. 8)
CDMA2000: 1xAdvanced, EVDO Rev. B (14.7 Mbps DL / 5.4 Mbps UL)
GSM: GSM/GPRS/EDGE
TD-SCDMA: 4.2 Mbps DL / 2.2 Mbps UL

What’s new is the inclusion of a Category 3 4G LTE baseband in the SoC itself, alongside DC-HSPA+ and TD-SCDMA for the Chinese market. This is a substantial increase in the number of air interfaces supported onboard the SoC, and it enables tighter integration and lower power thanks to the baseband being manufactured on that same 28nm process. There’s still the requirement for an external RF transceiver (RTR8600 or something similar) which houses all the analog, but that’s the same everywhere else.

Since the baseband in MSM8960 is shared with MDM9x15, the two are both 3GPP Release 9 devices, whereas MDM9600 and the other launch LTE devices are 3GPP Release 8, which was the launch standard. This newer 3GPP release brings a number of improvements and moves things closer to Voice over LTE (VoLTE) and SRVCC (single radio voice call continuity) for falling back to GSM/UMTS or 1x voice in the circumstance that 4G LTE coverage fades. The present combination of a camped 1x voice session alongside 4G LTE for data is also possible on MSM8960, which is exactly what’s done in the case of the HTC Thunderbolt.

In time, carriers will transition to using VoLTE and enrich the voice experience by offering services that work across the data session, alongside some circuit switched (CS) traditional 2G/3G voice to fall back to. For CDMA networks that’ll continue being the dual RF scenario which uses 1x for voice, and for UMTS networks that’ll be a SRVCC augmented fast handover to 3G for voice calls. This handover and call setup is targeted to take place in under one second.

There’s more to the connectivity situation as well, as MSM8960 includes built in WLAN 802.11b/g/n (single spatial stream), Bluetooth, and GPS. These are integrated directly into the MSM8960 the same way the cellular modem is and only require some external RF to use.

Of course, it’s one thing to talk about all this connectivity on MSM8960 and something else entirely to see it. With MSM8660, Qualcomm gave us one of their Mobile Development Platforms (MDPs) which is something of a reference design and development board for each SoC generation.

This time was no exception, and they showed off their new MSM8960 MDP connected to Verizon’s 4G LTE network streaming 1080p YouTube video, loading pages, and finally running a few speedtests using the Speedtest.net application.

This was all over Verizon’s 4G LTE network at Qualcomm HQ in San Diego and worked impressively well for hardware and software that still isn’t production level. In spite of marginal signal in the room we performed testing in, the MDP finished tests with pretty decent results. I ran some more tests on a Droid Bionic in the same room and saw similar results.

Final Words

Qualcomm has had MSM8960 silicon back in house for the past 3 months and is on-track for a release sometime in the first half of next year. Assuming Qualcomm can deliver on its claims, performance alone would be enough to sell this chip. Improved power characteristics and integrated LTE baseband really complete the package though.

The implications for a 1H 2012 MSM8960 release are tremendous. Android users will have to choose between a newer software platform (OMAP 4 running Ice Cream Sandwich) or much faster hardware (MSM8960). Windows Phone users may finally get a much needed performance boost if Microsoft chooses to standardize on Krait for its Windows Phone hardware refresh next year. End users will benefit as next year's smartphones and tablets will see, once again, a generational performance improvement over what's shipping today. LTE should also start to see much more widespread adoption (at the high end) as a result of Qualcomm's integrated LTE baseband.


107 Comments


  • metafor - Friday, October 07, 2011 - link

    I believe the comparison was simple: dual-Krait compared to 4xA9. I claimed Krait would be much closer to A15 level than A9 -- I was right.

    I claimed that 2xA15 (and 2xKrait) will be far better than 4xA9. I hold to that but some may disagree. I can understand that point.

    I claimed that both Krait and A15 were set to target similar frequencies (~2.5GHz) according to released roadmaps -- I was right.

    I claimed that Krait would initially be ~1.4-1.7GHz on 28LP and is planned to reach 2.5GHz on HKMG -- I was right.

    On every point, you disagreed with me -- and stated "I know for a fact that such and such". Did Krait turn out to be "a modified A9" as you claimed? No.

    Is its projected performance and clockspeeds far closer to A15-class than A9? Yes.

    Also, how often do you think that quad-core on your desktop actually gets utilized? Are you under the impression that multithreading is some kind of magical pixie dust that you sprinkle on to an OS kernel and all of a sudden, your applications will run faster?

    Hint: Android is fully multithread capable -- 3.0 even includes a great pthread library implementation. That doesn't mean individual applications actually are threaded, or even that they can be. This should be common knowledge by now: only certain workloads are highly parallelizable.
  • FunBunny2 - Saturday, October 08, 2011 - link

    -- This should be common knowledge by now: only certain workloads are highly parallelizable.

    Too many folks have never heard of Amdahl or his law.
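The point in the two comments above can be made concrete with Amdahl's law: if only a fraction p of a workload is parallelizable, n cores give a speedup of 1 / ((1 - p) + p/n). A minimal sketch -- the 50% parallel fraction is an assumed figure for illustration, not a measurement of any real app:

```python
# Amdahl's law: overall speedup on n cores when only a fraction p
# of the work can run in parallel. The 0.5 parallel fraction below
# is a made-up assumption, not a profiled number.

def amdahl_speedup(p: float, n: int) -> float:
    """Speedup with parallel fraction p spread across n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# With half the work serial, a quad-core tops out well short of 4x.
for cores in (1, 2, 4):
    print(cores, round(amdahl_speedup(0.5, cores), 2))
# -> 1 1.0 / 2 1.33 / 4 1.6
```

Which is why, for workloads like these, two faster cores can beat four slower ones.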
  • metafor - Friday, October 07, 2011 - link

    On top of that -- as we've discussed previously -- there is a very small subset of computationally intensive, highly thread-scalable applications out there. Specifically: compression, video transcoding and image processing (which will likely be the biggest performance-demanding app for the CPU on tablets what with the Photoshop Touch series).

    So yes, on 4xA9, that could potentially scale to all 4 cores. But here's the thing: those are all very NEON/FPU intensive applications.

    And guess what subsystem was substantially improved in A15 compared to A9?

    Double the data path width, unified load-store, fully out-of-order VFP + NEON and lower integer execution latency on top of that (which, IIRC, is what most image processing algorithms use).

    Even assuming A15 runs at the same clockspeed as an A9, it would still be 2-3x faster in typical arithmetic-intensive workloads.
  • partylikeits1999 - Saturday, October 08, 2011 - link

    Anybody who thinks that application performance can be predicted simply by CPU clock speed alone is a fool who has no business posting on sites like this. Let it go.
  • baritz - Friday, October 07, 2011 - link

    In the Power vs. Temperature plot on page two, have the axis labels been reversed accidentally?

    The way I read the graph as it is, 40nm transistors can handle more power without getting hot, while 28nm transistors get hot very quickly with only a small increase in power.
  • metafor - Friday, October 07, 2011 - link

    It seems pretty clear. As temperature increases (right on the X axis), 40G transistors consume more power (up in the Y axis). The power increase vs temperature increase curve of 28LP doesn't grow as fast.

    This, of course, has more to do with it being an LP process. 40LP transistors would have a similar curve.
  • Haserath - Saturday, October 08, 2011 - link

    Metafor is right about the curve having to do with the process. His explanation kinda makes it seem like a temp increase causes the power increase, though. It's the power increase that causes the temp increase, and "G" transistors are designed to handle more power with less wasted heat (temperature increase) than "LP" transistors. There's also a second reason why 28nm is hotter than 40nm.

    If you have a certain amount of heat energy being produced at a certain power level, the 40nm transistors will be a certain temperature.

    Now take that same amount of heat energy being produced, and shrink the transistors to half their size. This increases their temperature within the same power envelope.

    Of course they labeled a thermal limit on the power axis, because the holder of whatever phone this chip goes into is going to feel the heat coming from the chip due to how much power it's using (how much heat energy is put out), not just due to the temperature of the transistors.
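The area argument in the comment above can be sketched with a simple steady-state thermal model, T_junction = T_ambient + P x R_th, where the thermal resistance R_th scales roughly inversely with die area. The numbers here are invented for illustration and are not real 40nm/28nm figures:

```python
# Rough illustration of the die-area argument above: at a fixed
# power level, junction temperature rises as the hot spot shrinks.
# All thermal resistance values are made-up assumptions.

def junction_temp(power_w: float, r_th_c_per_w: float,
                  ambient_c: float = 25.0) -> float:
    """Steady-state junction temperature for a simple thermal model."""
    return ambient_c + power_w * r_th_c_per_w

r_th_big = 20.0            # assumed C/W for the larger die
r_th_small = 2 * r_th_big  # half the area -> roughly double the resistance

print(junction_temp(2.0, r_th_big))    # 65.0 C at 2W
print(junction_temp(2.0, r_th_small))  # 105.0 C at the same 2W
```

Same power envelope, smaller transistors, higher temperature -- which is the second effect the comment describes.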
  • metafor - Saturday, October 08, 2011 - link

    It's actually both :)

    This is a problem in a lot of circuit design. Power dissipation (both due to scattering and increase in resistance of the charge channel) increases with temperature. But temperature also increases as more power is dissipated. It's a positive feedback loop that just gets hotter and hotter.

    When simulating a circuit, this problem has to be taken into account, but simulating the heat dissipation is difficult, so one can never be sure that a circuit won't overheat under its own operation.

    It's an ongoing area of academic research how to simulate such a situation beforehand and avoid it.
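The feedback loop metafor describes can be sketched as a fixed-point iteration: leakage power grows with temperature, and temperature grows with dissipated power. All coefficients below are invented for illustration; when the loop gain (R_th x P0 x k) is below 1 the circuit settles, and above 1 the temperature runs away:

```python
# Toy model of the power/temperature feedback loop described above.
# power depends linearly on temperature (leakage coefficient k),
# and temperature depends linearly on power (thermal resistance r_th).
# The specific numbers are assumptions, not measured silicon data.

def settle(r_th: float, p0: float = 1.0, k: float = 0.01,
           t_amb: float = 25.0, steps: int = 200) -> float:
    """Iterate the feedback loop; return the final temperature."""
    temp = t_amb
    for _ in range(steps):
        power = p0 * (1.0 + k * (temp - t_amb))  # leakage rises with temp
        temp = t_amb + r_th * power              # temp rises with power
    return temp

print(round(settle(r_th=10.0), 1))  # loop gain 0.1: converges near 36.1 C
print(settle(r_th=120.0) > 1e6)     # loop gain 1.2: thermal runaway, True
```

The simulation problem metafor mentions is that in a real chip neither relationship is a known linear function, so you can't just check the loop gain analytically like this toy does.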
  • Haserath - Sunday, October 09, 2011 - link

    Well, that is true.

    Basically, increasing the power of the chip increases its heat output, which increases the temperature. And with that increase in temperature comes an increase in power.

    Heat dissipation is the only way for the chip to keep itself from burning up. It's just very hard to tell exactly how much can be dissipated even under known conditions, because heat is exchanged kinetically between atoms, and the amount radiated most likely differs between atoms as well.

    It's basically impossible to simulate an exact scenario for this exchange.
  • jjj - Friday, October 07, 2011 - link

    The minute a company gives you a bit of attention, you forget about objectivity.

    "The key is this: other than TI's OMAP 5 in the second half of 2012 and Qualcomm's Krait, no one else has announced plans to release a new microarchitecture in the near term"
    "Qualcomm remains the only active player in the smartphone/tablet space that uses its architecture license to put out custom designs."

    Both statements are false, and you know that very well.
