Mobile and Tablet Editor, Brandon Chester

This year’s CES was my second time attending the show. I was quite new to AnandTech during my first CES, and while this year’s show seemed even bigger in scale than the last, for me personally it felt much less frantic. CES is an interesting event for me because there aren’t always many major announcements relating to new mobile products, but the smartphone has become so prolific that most announcements tie into the mobile world in some fashion.

One of the big pushes this year was VR. This isn’t unexpected, given how many vendors are now trying to get involved with VR as it becomes clear that the category will be much bigger than the niche market some expected it to be in the early days of the Oculus Rift.

At CES, I was able to try the HTC Vive and the Gear VR. I wasn’t able to try the Oculus Rift, but everything I’ve heard from other people and read online says that the Vive is the best of these three major offerings. I have to admit that I was quite impressed by the demos for the Vive, and you can read a bit more about that here.

One of the barriers to adoption that I can see with the Vive is that setting up the Lighthouse tracking stations requires a substantial amount of space, and I don’t think many users are going to have a room they can dedicate to VR. In most cases, I expect that VR will end up being a way to increase immersion without requiring the user to move around any more than existing games require. Unfortunately, this won’t take full advantage of VR’s potential, but as a mass market venture that’s simply how most people want, or are able, to play games.

As a mobile editor, the most interesting thing about VR is how much it owes to the advances in the smartphone market. The only reason any of these products exist in their current forms is the work that went into small displays with higher and higher resolutions, along with the great advancements made in manufacturing OLED panels. I don’t think the current VR headsets are where they need to be in this regard, as in my experience you need to move away from PenTile in order to avoid the chromatic aliasing and other artifacts that the current headsets exhibit. The displays will eventually have to enter the realm of 4K and even 8K panels in order to have a high enough resolution to mimic how you actually see the world. However, the fact that such high resolution OLED panels exist at all right now is a direct result of how the technology advanced to serve smartphones, and without that the display technology wouldn’t be anywhere near what even these first generation VR headsets require.
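
To put a rough number on that, here’s a quick back-of-envelope sketch. The acuity and field-of-view figures are assumptions of mine for illustration (about one arcminute per pixel for 20/20 vision, and a per-eye field of view in the neighborhood of what current headsets offer), not specifications from any headset maker:

```python
# Back-of-envelope estimate of the per-eye resolution needed for a VR display
# to approach "retinal" sharpness. The acuity and field-of-view numbers below
# are rough assumptions for illustration, not specs from any actual headset.

ACUITY_PPD = 60      # pixels per degree: ~1 arcminute per pixel (20/20 vision)
FOV_H_DEG = 100      # assumed horizontal field of view per eye, in degrees
FOV_V_DEG = 100      # assumed vertical field of view per eye, in degrees

width_px = ACUITY_PPD * FOV_H_DEG     # ~6,000 pixels
height_px = ACUITY_PPD * FOV_V_DEG    # ~6,000 pixels
megapixels = width_px * height_px / 1e6

print(f"~{width_px} x {height_px} per eye (~{megapixels:.0f} MP), "
      f"versus ~8 MP for an entire 4K panel shared between both eyes")
```

Even with these fairly conservative assumptions the result lands in 8K-class territory per eye, which is why the panels still have a long way to go.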

On the tablet side of things, there were two notable launches at the show this year. The first was Huawei’s MediaPad M2 10, a mid-range Android tablet using the Kirin 930. More interesting was the launch of Samsung’s TabPro S. This announcement was a surprise to me, and even more surprising was the fact that Samsung had decided to go the Windows route with their 2-in-1 instead of using Android. Samsung may have realized that moving to a 2-in-1 amplifies the issues with Android tablets, and using Windows allows them to provide an experience that leans more toward a laptop, which I feel is what many 2-in-1 buyers are really looking for anyway.

One category notably absent from this year’s show was smartwatches. Part of this certainly has to do with how much of the market Apple has grabbed, coupled with the fact that they don’t attend the show. Even so, I was surprised to see very little promotion from other vendors and nothing in the way of new announcements. We did see new finishes for the Huawei Watch and the Samsung Gear S2, but no completely new hardware. Perhaps we’ll see more at MWC in February.

While smartwatches were missing in action, that isn’t to say that wearables as a whole were missing from the show. Fitness trackers were being shown off in multiple places, as were head-mounted displays in a similar style to Google Glass. Zeiss’s Smart Optics technology for making discreet smart glasses was definitely the most interesting thing going on in that category, despite the fact that it’s only a tech demo right now. I hope they’re already in talks with companies to get this technology into future commercial products.

The last area that I ended up seeing a lot of at the show was television. In hindsight this is a bit surprising, since I don’t even own a television or any sort of cable service, but I suppose my interest in display technology played a part. Two main things happened in the TV space. The first is the adoption of wider color gamuts and support for HDR in the standards for UltraHD content.

Both of these are important, although I am very disappointed by the efforts of a group of companies to push DCI-P3 support into these specifications alongside Rec. 2020, simply because their display technology isn’t capable of reproducing the larger color space in its entirety. I was able to see quantum dot panels this year that covered over 90% of the Rec. 2020 gamut, and the use of a second, smaller gamut may cause problems down the line with input and output chains that don’t handle color management for P3 content properly once displays move to Rec. 2020. Consumers with newer Rec. 2020 displays might end up seeing oversaturated pictures, while owners of DCI-P3 panels may have to deal with undersaturation of Rec. 2020 content.
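
To make the mismatch concrete, here’s a minimal sketch of the two paths, using approximate published primaries for Display P3 and Rec. 2020 and working in linear light; real pipelines also involve transfer functions and white-point handling that are ignored here:

```python
import numpy as np

# Approximate linear RGB -> XYZ matrices (D65 white point) for Display P3 and
# Rec. 2020. Values are rounded and this is purely an illustrative sketch of
# the gamut mismatch, not a production color pipeline.
P3_TO_XYZ = np.array([[0.4866, 0.2657, 0.1982],
                      [0.2290, 0.6917, 0.0793],
                      [0.0000, 0.0451, 1.0439]])

REC2020_TO_XYZ = np.array([[0.6370, 0.1446, 0.1689],
                           [0.2627, 0.6780, 0.0593],
                           [0.0000, 0.0281, 1.0610]])

def p3_to_rec2020(rgb_p3):
    """Color-manage a linear P3 value into Rec. 2020 coordinates via XYZ."""
    return np.linalg.solve(REC2020_TO_XYZ, P3_TO_XYZ @ rgb_p3)

p3_green = np.array([0.0, 1.0, 0.0])         # fully saturated P3 green
managed = p3_to_rec2020(p3_green)            # ~[0.20, 0.94, 0.02]
unmanaged = p3_green                         # code values passed through untouched

print("managed  :", np.round(managed, 3))    # the color the content actually intends
print("unmanaged:", np.round(unmanaged, 3))  # drives the wider Rec. 2020 green fully

# Skipping the conversion stretches P3 content out to the wider Rec. 2020
# primaries (oversaturation); mapping Rec. 2020 content straight onto a P3
# panel compresses it instead (undersaturation).
```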

The second thing I noticed about the TV space is the lack of advancement that OLED has seen. This is mainly due to Samsung’s choice to push LCD panels with quantum dots, which takes the largest OLED manufacturer out of the race. While they briefly stepped into the OLED TV market a few years ago, Samsung has since continued primarily as an LCD manufacturer.

There are a couple of important things to consider here. I haven’t yet seen an OLED panel approaching full coverage of the Rec. 2020 gamut, which is part of the reason why DCI-P3 has been put into the UltraHD standards. This is conflicting because DCI-P3 is a gamut intended for cinema use, and it now coexists with the Rec. 2020 gamut that will be used for UHDTV. OLED’s limited peak brightness also limits the range of bright shades for HDR content, but its black level allows for better detail in dark areas. It’s worth noting that light falling on the display ends up negating the advantage of OLED’s black levels due to reflections, so the black levels only provide a benefit in an environment that blocks out other light sources.
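
The reflection problem can be put into rough numbers as well. This is a simple illustrative calculation with assumed figures for peak brightness, screen reflectance, and room lighting; none of it is measured data from any particular panel:

```python
# Rough illustration of how reflected ambient light erodes an OLED panel's
# black-level advantage. Peak luminance, screen reflectance, and ambient
# levels are assumed round numbers for the example, not measured data.
from math import pi

PEAK_NITS = 500.0        # assumed OLED peak luminance
BLACK_NITS = 0.0005      # effectively zero emission in black areas
REFLECTANCE = 0.05       # assume the screen reflects ~5% of incident light

def effective_contrast(ambient_lux):
    """Contrast ratio once reflected room light is added to both extremes."""
    # A diffusely reflecting screen adds roughly reflectance * lux / pi nits.
    reflected = REFLECTANCE * ambient_lux / pi
    return (PEAK_NITS + reflected) / (BLACK_NITS + reflected)

for lux in (0, 5, 150):  # blacked-out room, dim home theater, lit living room
    print(f"{lux:>4} lux -> ~{effective_contrast(lux):,.0f}:1")

# A dark room keeps the essentially 'infinite' native contrast; under ordinary
# room lighting the visible contrast collapses to a few hundred to one.
```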

With these things in mind, it does make sense that Samsung is pushing in a different direction. When looking at the TV market, I don’t see OLED as becoming a complete successor to LCD, while I do expect it to do so in the mobile space. TVs often have static parts of the interface, and issues like burn in and emitter aging will be difficult to control. Improvements to emitter materials will allow for higher peak brightness and a greater color gamut, but it seems like OLED may be more of a stopgap technology than a long-term play.

Looking strictly at the mobile market, I don’t think there was a lot to be excited about from this year’s CES. Those sorts of announcements are usually reserved for MWC anyway, so that’s to be expected. If you expand your view to the technology market as a whole, there was a lot of interesting work going on. I think VR is going to be big, even though the display technology isn’t where it needs to be yet. Early adopters will help drive further investment, which will drive technology improvements, and eventually prices will come down as well. I think head-mounted displays in general will also become more widely adopted as technologies are created to implement them in more discreet ways, and Zeiss’s demo was a great example of how quickly things can move.

As for the display and TV market, I think the move to Rec. 2020 will be delayed as manufacturers ship DCI-P3 panels instead, which is quite unfortunate. HDR has the potential to greatly improve the dynamic range of video content, and it’ll be interesting to see which of the several proposals for HDR content encoding ends up being adopted most widely. As for panel technology, I think LCD is going to stick around longer than people expect, and OLED will probably be something that exists alongside LCD rather than replacing it, with a future technology such as MicroLED eventually replacing both down the line. As always, technology keeps moving forward in many different ways.

Comments

  • JonnyDough - Wednesday, January 27, 2016 - link

    "With these things in mind, it does make sense that Samsung is pushing in a different direction. When looking at the TV market, I don’t see OLED as becoming a complete successor to LCD, while I do expect it to do so in the mobile space. TVs often have static parts of the interface, and issues like burn in and emitter aging will be difficult to control."

    Wouldn't that be the opposite? Phones and tablets are often used in uncontrolled environments, and have lock screens and apps that create static impressions on a display as much as any TV in my opinion. I think OLEDs could definitely penetrate the television market, and as a result they will trickle over into other markets as costs come down. Unless a truly viable alternative to OLEDs can overtake these spaces, I think that continual refinements in OLED help it prove to be a constantly used and somewhat static technology. Robots are moving more and more towards organics as well - so it would make sense that in the future we borrow more and more from nature as we come to understand it.
  • Brandon Chester - Wednesday, January 27, 2016 - link

    Relative to TVs, you keep your phone screen on for a comparatively short period of time, so aging is actually less of an issue in the mobile market. Aging is the bigger issue for TV adoption, with burn-in being a secondary concern that could become a larger problem with the adoption of TV boxes that get left on with a very static UI.
  • JonnyDough - Thursday, January 28, 2016 - link

    You brought up some good points. I wonder though how many people have a phablet and watch Netflix or HBO now when on the road in a hotel bed.
  • Kristian Vättö - Thursday, January 28, 2016 - link

    I would say the even bigger factor is the fact that TV upgrade cycles are much longer than smartphones. While the average smartphone upgrade cycle is now 2-2.5 years, most people keep their TVs for much longer than that, and expect them to function properly.
  • Mangemongen - Tuesday, February 2, 2016 - link

    I'm writing this on my 2008, possibly 2010 Panasonic plasma TV which shows static images for hours every day, and I have seen no permanent burn in. There is merely some slight temporary burn in. Is OLED worse than modern plasmas?
  • JonnyDough - Wednesday, January 27, 2016 - link

    What we need are monitors that have a built-in GPU slot. Since AMD is already helping them to enable other technologies, why not that? Swappable GPUs on a monitor: the monitors already have a PSU built in, so why not? Put a more powerful swappable PSU in the monitor, a mobile-like GPU, and voila: external plug-and-play graphics.
  • Klug4Pres - Wednesday, January 27, 2016 - link

    "The quality of laptops released at CES were clearly a step ahead of what they have been in the past. In the past quality was secondary to quantity, but with the drop in volume, everyone has had to step up their game."

    I don't really agree with this. Yes, we have seen some better screens at the premium end, but still in the sub-optimal 16:9 aspect ratio, a format that arrived in laptops mainly just to shave a few bucks off cost.

    Everywhere we are seeing quality control issues, poor driver quality, woeful thermal dissipation, a pointless pursuit of ever thinner designs at the expense of keyboard quality, battery life, speaker volume etc., and a move to unmaintainable soldered CPUs and RAM.

    Prices are low, quality is low, volumes are getting lower. Of course, technology advances in some areas have led to improvements, e.g. Intel's focus on idle power consumption that culminated in Haswell battery-life gains.
  • rabidpeach - Wednesday, January 27, 2016 - link

    yea? 16k per eye? is that real, or did he make up numbers to give Radeon something to shoot for in the future?
  • boeush - Wednesday, January 27, 2016 - link

    There are ~6 million cones (color photoreceptors) per human eye. Each cone perceives only the R, G, or B portion (roughly speaking), making for roughly 2 megapixels per eye. Well, there's much lower resolution in the blue channel, so let's say 4 megapixels to be generous.

    That means 4k, spanning the visual field, already exceeds human specs by a factor of 2, at first blush. Going from 4k to 16k boosts pixel count by a factor of 16, so we end up exceeding the human cone count by a factor of about 32!

    But there's a catch. First, human acuity exceeds the limit of color vision, because we have 20x more rods (monochromatic receptors) than cones, which provide very fine edge and texture information over which the color data from the cones is kind of smeared or interpolated by the brain. Secondly, most photoreceptors are clustered around the fovea, giving very high angular resolution over a small portion of the visual field - but we are able to rapidly move our eyeballs around (saccades), integrating and interpolating the data to stitch and synthesize together a more detailed view than would be expected from a static analysis of the optics.

    In light of all of which, perhaps 16k uniformly covering the entire visual field isn't such overkill after all if the goal is the absolute maximum possible visual fidelity.

    Of course, running 16k for each eye at 90+ Hz (never mind higher framerates) would take a hell of a lot of hardware and power, even by 2020 standards. Not to mention, absolute best visual fidelity would require more detailed geometry, and more accurate physics of light, up to full-blown real-time ray-tracing with detailed materials, caustics, global illumination, and many bounces per ray - something that would require a genuine supercomputer to pull off at the necessary framerates, even given today's state of the art.

    So ultimately, it's all about diminishing returns, low-hanging fruit, good-enough designs, and balancing costs against benefits. In light of which, probably 16k VR is impractical for the foreseeable future (meaning, the next couple of decades)... Personally, I'd just be happy with a 4k virtual screen, spanning let's say 80% of my visual field, and kept static in real space via accelerometer-based head-tracking (to address motion sickness) with an option to intentionally reposition it when desired - then I wouldn't need any monitors any longer, and would be able to carry my high-res screen with/on me everywhere I go...
  • BMNify - Wednesday, January 27, 2016 - link

    "AMD's Raja Koduri stating that true VR requires 16K per eye at 240 Hz."

    well, according to bbc r&d, scientific investigations found the optimum to be close to the 300 fps we were recommending back in 2008, and proved that higher frame rates dramatically reduce motion blur, which can be particularly disturbing on large modern displays.

    it seems optimal to just use the official UHD2 (8k) spec with multi surround sound and the higher real frame rates of 100/150/200/250 fps for high action content as per the existing bbc/nhk papers... no real need to define UHD3 (16k) for near eye/direct retina display
