LTE Network Tech Explained

If you look at the evolution of wireless networks from generation to generation, one of the clear generational delimiters that sticks out is how multiplexing schemes have changed. Multiplexing defines how multiple users share a slice of spectrum, arguably the most important core function of a cellular network. Early 2G networks divided access into time slots (TDMA—Time Division Multiple Access); GSM is the most notable example, being entirely TDMA-based.

3G saw the march onward to CDMA (Code Division Multiple Access), where each user transmits across the entire 5MHz or 1.25MHz channel, but encodes data atop the spectrum with a unique pseudorandom code. The receiver also has this pseudorandom code and decodes the signal with it, while all other signals look like noise. Decode the signal with each user's pseudorandom code, and you can share the slice of spectrum with many users. As an aside, Qualcomm initially faced strong criticism and disagreement from GSM proponents when CDMA was first proposed because of how it seems to violate physics. Well, here we are with both 3GPP and 3GPP2 using CDMA in their 3G technologies.
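To make the despreading idea concrete, here's a toy sketch in Python. For simplicity it uses orthogonal Walsh codes (as IS-95's forward link does) rather than long pseudorandom sequences, and ignores modulation and noise entirely; `walsh_codes`, `spread`, and `despread` are illustrative names, not any real API.

```python
def walsh_codes(order):
    # Sylvester/Hadamard construction: 2**order mutually orthogonal +/-1 codes
    codes = [[1]]
    for _ in range(order):
        codes = ([c + c for c in codes] +
                 [c + [-x for x in c] for c in codes])
    return codes

def spread(bits, code):
    # Multiply each +/-1 data bit across the whole spreading code
    return [b * chip for b in bits for chip in code]

def despread(signal, code):
    # Correlate each code-length window against the code; the sign is the bit
    n = len(code)
    return [1 if sum(s * c for s, c in zip(signal[i:i + n], code)) > 0 else -1
            for i in range(0, len(signal), n)]

codes = walsh_codes(6)            # 64-chip codes
alice, bob = codes[1], codes[2]
# Both users transmit over the same spectrum; the air just sums their signals
channel = [a + b for a, b in zip(spread([1, -1, 1], alice),
                                 spread([-1, -1, 1], bob))]
print(despread(channel, alice))   # [1, -1, 1]  -> Alice's bits
print(despread(channel, bob))     # [-1, -1, 1] -> Bob's bits
```

Because the codes are orthogonal, correlating against Alice's code cancels Bob's contribution exactly; with pseudorandom codes the cancellation is only statistical, which is part of why CDMA capacity is interference-limited.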

Regardless, virtually all the real 4G options move to yet another multiplexing scheme called OFDMA (Orthogonal Frequency Division Multiple Access). LTE, WiMAX, and the now-defunct UMB (3GPP2's 4G offering) all use OFDMA on the downlink (or forward link). That's not to say it's something brand new; 802.11a/g/n use OFDM right now. What OFDMA offers over the other multiplexing schemes is slightly higher spectral efficiency, but more importantly a much easier way to use larger and differently sized slices of spectrum—from the 5MHz in WCDMA or 1.25MHz in CDMA2000, up to 10, 15, and 20MHz channels.

We could spend a lot of time talking about OFDMA alone, but essentially what you need to know is that OFDMA makes using larger channels much easier from an RF perspective. Engineering CDMA hardware for similarly large channel sizes is much more difficult.

In traditional FDMA, carriers are spaced apart with large enough guard intervals to guarantee no inter-carrier interference occurs, and then band-pass filtered. In OFDM, the subcarriers are generated so that inter-carrier interference doesn't happen in the first place. That's done by picking a symbol duration and spacing subcarriers at integer multiples of its inverse, so that each subcarrier completes a whole number of cycles per symbol and adjacent subcarriers differ by exactly one cycle. This relationship guarantees that the overlapping sidebands from every subcarrier have nulls at every other subcarrier's center frequency. The result is the interference-free OFDM symbol we're after, and efficient packing of subcarriers. What makes OFDMA awesome is that at the end of the day, all of this can be generated with an IFFT (inverse fast Fourier transform).
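The orthogonality condition is easy to verify numerically. The sketch below is not LTE's actual signal chain—it just checks that sampled complex exponentials whose cycle counts differ by an integer have zero inner product over one symbol period; `subcarrier` and `correlate` are illustrative names.

```python
import cmath

N = 64  # samples per OFDM symbol

def subcarrier(k):
    # One symbol period of a subcarrier completing k full cycles per symbol
    return [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]

def correlate(a, b):
    # Normalized inner product over the symbol period
    return sum(x * y.conjugate() for x, y in zip(a, b)) / N

# Adjacent subcarriers (cycle counts differing by exactly one) don't interfere:
print(abs(correlate(subcarrier(7), subcarrier(8))))  # ~0.0
# ...while a subcarrier correlated with itself yields full energy:
print(abs(correlate(subcarrier(7), subcarrier(7))))  # 1.0
```

The receiver's job is exactly this correlation, done for all subcarriers at once via an FFT, which is what makes the scheme so cheap to implement.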

If that’s a confusing mess, just take away that OFDMA enables very dense packing of subcarriers that data can then be modulated on top of. Each client in the network talks on a specific set of OFDM subcarriers, which are shared among all users on the channel through some pre-arranged hopping pattern. This is opposed to the CDMA scheme, where users encode data across the entire slice of spectrum.

The advantages that OFDMA brings are numerous. If part of the channel suddenly fades or is subject to interference, subcarriers on either side are unaffected and can carry on. User equipment can opportunistically move between subcarriers depending on which have better local propagation characteristics. Even better, each subcarrier can be modulated appropriately—for faster performance close to the cell center, and for greater link quality at the cell edge. That said, there are disadvantages as well: subcarriers need to remain orthogonal at all times, and if frequency offsets aren't carefully tracked and corrected, subcarriers lose orthogonality and the resulting inter-carrier interference can fail the link.
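That fragility can be shown with the same kind of numerical check: modeling a frequency offset as a fractional cycle count makes energy leak into the neighboring subcarrier's correlator. This is a toy model, not LTE's synchronization machinery; `tone` and `leakage` are hypothetical helpers.

```python
import cmath

N = 64  # samples per OFDM symbol

def tone(f):
    # f in units of the subcarrier spacing; a fractional f models a frequency offset
    return [cmath.exp(2j * cmath.pi * f * n / N) for n in range(N)]

def leakage(f, k):
    # Energy a tone at frequency f leaks into the correlator for subcarrier k
    return abs(sum(x * y.conjugate() for x, y in zip(tone(f), tone(k))) / N)

print(leakage(7.0, 8))  # ~0.0: perfectly aligned, no inter-carrier interference
print(leakage(7.1, 8))  # ~0.11: a 10% spacing offset leaks into the neighbor
```

This is why OFDM receivers spend so much effort on carrier frequency offset estimation: even a small fraction of the subcarrier spacing turns into interference at every other subcarrier.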

Again, the real differentiator between evolutionary 3G and true 4G boils down to whether the air interface uses OFDMA as its multiplexing scheme, and can thus support beefy 10 or 20MHz channels—LTE, WiMAX, and UMB all use it. The LTE uplink uses SC-FDMA, which can be thought of as a sort of precoded OFDMA. One area where WiMAX is technically superior to LTE is its use of OFDMA on the uplink as well, which in theory offers faster throughput.

There are other important differentiators like MIMO and 64QAM support. HSPA+ also adds optional MIMO (spatial multiplexing) and 64QAM modulation support, but even the fastest HSPA+ implementation should still be differentiated from true 4G.

Again, OFDMA doesn’t inherently mean better spectral efficiency. In fact, with the same higher-order modulations, channel size, and MIMO support, OFDMA and CDMA are relatively similar. The difference is that OFDMA in LTE enables variable and much larger channel sizes. This table from Qualcomm says as much:

Keep in mind, the LTE device category here is category 4.
Launch LTE devices with MDM9600 are category 3. 

LTE heavily leverages MIMO for spatial multiplexing on the downlink, and three different modulation schemes—QPSK, 16QAM, and 64QAM. There are a number of different device categories supported with different maximum bitrates. The differences are outlined in the table on the following page, but essentially all the Verizon launch LTE devices are category 2 or 3 per Verizon specifications. Differences between device categories boil down to the multi-antenna scheme supported and internal buffer size. Again, the table shown corresponds to 20MHz channels—Verizon uses 10MHz channels right now.
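As a back-of-envelope check on where those category bitrates come from, the raw physical-layer rate falls out of the subcarrier count, symbol rate, modulation order, and number of spatial streams. The sketch below uses `raw_downlink_mbps` as a hypothetical helper; it ignores channel coding, control signaling, and reference symbols, so real peak rates (and the category caps) sit well below these figures.

```python
# Back-of-envelope LTE downlink rate (raw, before coding/control overhead)
def raw_downlink_mbps(bandwidth_mhz, bits_per_symbol=6, mimo_streams=2):
    # LTE packs 12 subcarriers per resource block, roughly 5 RBs per MHz
    resource_blocks = int(bandwidth_mhz * 5)   # e.g. 50 RBs in a 10MHz channel
    subcarriers = resource_blocks * 12
    symbols_per_sec = 14 * 1000                # 14 OFDM symbols per 1ms subframe
    return subcarriers * symbols_per_sec * bits_per_symbol * mimo_streams / 1e6

print(raw_downlink_mbps(10))  # 100.8 raw Mbps in Verizon's 10MHz channel
print(raw_downlink_mbps(20))  # 201.6 raw Mbps in a full 20MHz channel
```

Plugging in 64QAM (6 bits per symbol) and 2x2 MIMO shows why the big headline numbers require the full 20MHz channel that Verizon's current 10MHz deployment doesn't have.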

One of the limitations of WCDMA UMTS was its requirement of 5MHz channels for operation. LTE mitigates this by allowing a number of different channel sizes—1.4, 3, 5, 10, 15, and 20MHz channels are all supported. In addition, the standard supports both time division duplexing (TDD) and frequency division duplexing (FDD) for uplink and downlink. Verizon right now has licenses to the 700MHz Upper C block (Band 13) in the US, which is 22MHz of paired FDD spectrum. That works out to 10MHz slices for upstream and downstream, with an odd 1MHz on each side whose purpose I’m not entirely certain of.

All of what I’ve described so far is part of LTE’s new air interface—EUTRAN (Evolved UMTS Terrestrial Radio Access Network). The other half of the picture is the Evolved Packet Core (EPC). The combination of the two forms LTE’s Evolved Packet System. There are a lot of e-for-evolved prefixes floating around inside LTE, and a host of changes.



Comments

  • Brian Klug - Thursday, April 28, 2011 - link

    I mentioned that with LTE, sometimes the handovers pause the data context while the handover happens. It's an occasional 50-500ms pause, sometimes a second. Honestly I noticed it more on the data cards than I did on the Thunderbolt or the Samsung hotspot.

    That's another thing which will improve with time.

  • iwod - Wednesday, April 27, 2011 - link

    I don't think bandwidth was much of a concern for the mature 3G market. Even 1Mbps is good (enough) for web surfing. The problem is latency, and it is very high for 3G networks, sometimes up to 1 second.

    LTE was supposed to bring round-trip performance down to the double-digit ms range. But my skim through this article sees no test of latency.

    Another growing concern for me is that data and mobile networks just don't seem to work. You have a finite amount of total bandwidth, but people consume data far greater than anyone would expect. I think someday we'll have to deploy nationwide micro WiFi + LTE stations to help with bandwidth, especially in populated cities. (I can't even imagine how it would work out in places like Hong Kong and China.)
  • Brian Klug - Thursday, April 28, 2011 - link

    We tested latency on Page 10 if you're interested. Both latency as measured by (which isn't perfect) and by using pingplotter for almost 12 hours to a number of targets.

    It's sub 100 ms for a lot of things, and I showed gaming at 50ms to a local CS:S server. It's a definite improvement again thanks to much faster signaling and a shorter frame time.

  • DanNeely - Thursday, April 28, 2011 - link

    Unless I'm misunderstanding what the graph is showing, ATT's lower C block ownership is fragmentary with no coverage at all in large parts of the country.
  • DanNeely - Thursday, April 28, 2011 - link

    nevermind, I misunderstood what you were saying....
  • bman212121 - Sunday, May 1, 2011 - link

    I've seen another report from someone using LTE in New Orleans showing similar numbers. Anything sub-100ms should be fine for an FPS. I've definitely seen worse under normal circumstances. FWIW, using a D2 and comparing the ping times from the phone's terminal to a PC using the 3G hotspot, the wireless added 16ms of latency.
  • bman212121 - Sunday, May 1, 2011 - link

    I have to wonder if they didn't include USB tethering simply because they couldn't sustain the power needed. If you were having issues with a 700mA charger, then the maximum 500mA from a computer's USB port could be problematic. It is interesting though that the other devices worked, so I'm guessing that the WiFi is what is really eating battery life.
  • tjk818 - Wednesday, July 27, 2011 - link

    I have the Pantech UML 290 and a Cradlepoint router, all updated with the latest firmware (4G LTE and 3G). It works great on 3G; now I'm converting to 4G LTE using a ZADACOM feed cut for Verizon 746-806MHz and a grid antenna (Hyperlink). Without the grid I get 1 bar constant, sometimes going to 2 bars; with the grid I get nothing.
    Does the cable in the Pantech modem need to be connected or disconnected for it to work on the grid? I live about 3 miles from the tower. Also, is there a setting I can use in the VZAM menu (under the DIAGVZW menu) to set the modem's 4G port to activate the external antenna port and deactivate the internal antenna? I'm using a specan, and I can see the carriers from the tower at 783MHz.

    Feedback is welcome.
  • milan03 - Monday, August 22, 2011 - link

    Hey Brian: you've mentioned that current Verizon LTE devices are category 3, meaning they can only achieve up to 50Mbps with 2x10MHz. Are you sure that's the breakdown? Because I'm seeing 50+Mbps on a daily basis here in NYC, and when downloading a sustained, well-seeded torrent I'm seeing around 6MB/s, which makes no sense. I am convinced that the Thunderbolt is capable of 73Mbps, with all the overhead up to about 60Mbps. Am I wrong? I do have poor upload speeds, which is explained by the Thunderbolt being 2x1 MIMO, not 2x2 like other devices, but is there any other LTE handset that's 2x2 MIMO?

    Here is what I'm seeing these days:
  • oz973 - Tuesday, January 17, 2012 - link

    How long does it take for this to charge to 100%? And how can you tell?
