SSD Editor, Billy Tallis

With only a few months as an AnandTech Editor under my belt, this was my first CES and my first time in Las Vegas. The scale of the event is almost incomprehensible (I frequently relied on my smartphone's GPS to navigate even when indoors), and my schedule was packed with meeting after meeting; visiting ten different companies in one day can get hectic. But it was worth it to meet all the company contacts I'd previously only been introduced to by email, meet most of my fellow AT writers, and get hands-on with upcoming SSD technology.

While I would have loved to take the time to look at all the drones flying around (in cages, fortunately) or wait in line to try some of the VR demos, I only had about two hours to explore the CES show floor. That wasn't even enough time to walk past half of the exhibits. I stopped at a few booths to drool over FLIR's thermal cameras while imagining how a PCIe M.2 SSD might light up under their gaze, to gawk at wireless routers competing to have the most antennas, and to look for cameras worth upgrading to, but what I have to report on is just SSDs.

For months, the SSD market has largely been in a holding pattern, awaiting next-generation components to turn into products that are actually exciting. The high-end SATA market hasn't budged; the low-end market has seen gradual price decreases from transitions to TLC NAND, cheaper controllers, and Toshiba's 15nm flash finally replacing their 19nm flash in volume. The PCIe SSD market still suffers from too few choices: expensive drives from Intel and Samsung, and a handful of outdated drives that don't support PCIe 3.0 or NVMe yet aren't any cheaper for it. The doldrums will be over soon, as 3D NAND and NVMe are about to become widely available from every brand.

We've covered the announcements and roadmap updates from the SSD controller vendors, but haven't highlighted many of the retail products that will be incorporating them. Some of the products demoed were fairly unsurprising, such as Plextor's M7V, a value SATA drive and successor to the M6V (switching from MLC with the SM2246EN controller to TLC with a Marvell controller); Plextor gets to cut costs and gains some more room to differentiate the product using in-house custom firmware. The OCZ Trion 150 improves on the Trion 100 by moving from Toshiba's A19nm TLC to their 15nm TLC, with no changes in performance specifications. Aside from cosmetic differences that aren't necessarily finalized, most of the new Phison-based products don't stand out from the crowd, and there's not much to say about them individually.

The most unusual drive was clearly Mushkin's prototype for a 4TB model in their Reactor line. In order to hit that capacity they're putting two SM2246EN controllers behind a JMicron JMS562. The latter chip is one you'd more commonly expect to find in a multi-bay USB hard drive enclosure, but it can use one of its three SATA channels as the host interface instead of USB, making it into a transparent RAID controller. This reportedly kills random access performance, but Mushkin is expecting to be able to ship the 4TB model for a mere $500, which will greatly help it find a niche.

Plextor's M8Pe NVMe drive, using Marvell's 88SS1093 controller, will be available as an add-in card or in the M.2 2280 form factor. They had a mock-up of a wraparound heatspreader for the M.2 model with a similar motif to the add-in card's heatsink. This is the first M.2 drive we've seen with any sort of heatsink or heatspreader on it, which may become more important as performance increases.

ADATA's exhibit impressed me with the sheer breadth of their product line. Between their consumer, enterprise, and industrial SSDs, they were showing off drives based on virtually every controller except Phison's. Their IM2P3738N industrial M.2 drive uses Marvell's low-cost 88NV1140 PCIe 3.0 x1 NVMe controller, making it the first deliberately low-end NVMe product. ADATA crammed all the other new stuff into one demo system: a 2.5" drive with IMFT 3D NAND, an M.2 prototype with Silicon Motion's SM2260 NVMe controller, and a U.2 drive with Marvell's 88SS1093 NVMe controller. The latter drive was in a PCIe to 2.5" U.2 riser card that looked like it would be a handy addition to my testbed. We've asked for a couple in order to do power testing!

Comments

  • JonnyDough - Wednesday, January 27, 2016 - link

    "With these things in mind, it does make sense that Samsung is pushing in a different direction. When looking at the TV market, I don’t see OLED as becoming a complete successor to LCD, while I do expect it to do so in the mobile space. TVs often have static parts of the interface, and issues like burn in and emitter aging will be difficult to control."

    Wouldn't that be the opposite? Phones and tablets are often used in uncontrolled environments, and have lock screens and apps that create static impressions on a display as much as any TV, in my opinion. I think OLEDs could definitely penetrate the television market, and I think as a result they will trickle over into other markets as costs come down. Unless a truly viable alternative to OLEDs can overtake these spaces, I think continual refinements in OLED will help it prove to be a constantly used and somewhat static technology. Robots are moving more and more towards organics as well - so it would make sense that in the future we borrow more and more from nature as we come to understand it.
  • Brandon Chester - Wednesday, January 27, 2016 - link

    Relative to TVs you keep your phone screen on for a comparatively short period of time. Aging is actually less of an issue in the mobile market. Aging is the bigger issue with TV adoption, with burn in being a secondary thing which could become a larger problem with the adoption of TV boxes that get left on with a very static UI.
  • JonnyDough - Thursday, January 28, 2016 - link

    You brought up some good points. I wonder though how many people have a phablet and watch Netflix or HBO now when on the road in a hotel bed.
  • Kristian Vättö - Thursday, January 28, 2016 - link

    I would say the even bigger factor is that TV upgrade cycles are much longer than smartphone upgrade cycles. While the average smartphone upgrade cycle is now 2-2.5 years, most people keep their TVs for much longer than that, and expect them to function properly.
  • Mangemongen - Tuesday, February 2, 2016 - link

    I'm writing this on my 2008, possibly 2010 Panasonic plasma TV which shows static images for hours every day, and I have seen no permanent burn in. There is merely some slight temporary burn in. Is OLED worse than modern plasmas?
  • JonnyDough - Wednesday, January 27, 2016 - link

    What we need are monitors that have a built-in GPU slot. Since AMD is already helping them enable other technologies, why not that? Swappable GPUs on a monitor; the monitors already have a PSU built in, so why not? Put a more powerful swappable PSU in the monitor, add a mobile-like GPU, and voila: external plug-and-play graphics.
  • Klug4Pres - Wednesday, January 27, 2016 - link

    "The quality of laptops released at CES were clearly a step ahead of what they have been in the past. In the past quality was secondary to quantity, but with the drop in volume, everyone has had to step up their game."

    I don't really agree with this. Yes, we have seen some better screens at the premium end, but still in the sub-optimal 16:9 aspect ratio, a format that arrived in laptops mainly just to shave a few bucks off cost.

    Everywhere we are seeing quality control issues, poor driver quality, woeful thermal dissipation, a pointless pursuit of ever thinner designs at the expense of keyboard quality, battery life, speaker volume, etc., and a move to unmaintainable soldered CPUs and RAM.

    Prices are low, quality is low, volumes are getting lower. Of course, technology advances in some areas have led to improvements, e.g. Intel's focus on idle power consumption that culminated in Haswell battery-life gains.
  • rabidpeach - Wednesday, January 27, 2016 - link

    yea? 16k per eye? is that real, or did he make up numbers to give Radeon something to shoot for in the future?
  • boeush - Wednesday, January 27, 2016 - link

    There are ~6 million cones (color photoreceptors) per human eye. Each cone perceives only the R, G, or B portion (roughly speaking), making for roughly 2 megapixels per eye. Well, there's much lower resolution in R, so let's say 4 megapixels to be generous.

    That means 4k, spanning the visual field, already exceeds human specs by a factor of 2, at first blush. Going from 4k to 16k boosts pixel count by a factor of 16, so we end up exceeding human photoreceptor count by a factor of 32!

    But there's a catch. First, human acuity exceeds the limit of color vision, because we have 20x more rods (monochromatic receptors) than cones, which provide very fine edge and texture information over which the color data from the cones is kind of smeared or interpolated by the brain. Secondly, most photoreceptors are clustered around the fovea, giving very high angular resolution over a small portion of the visual field - but we are able to rapidly move our eyeballs around (saccades), integrating and interpolating the data to stitch and synthesize together a more detailed view than would be expected from a static analysis of the optics.

    In light of all of which, perhaps 16k uniformly covering the entire visual field isn't such overkill after all if the goal is the absolute maximum possible visual fidelity.

    Of course, running 16k for each eye at 90+ Hz (never even mind higher framerates) would take a hell of a lot of hardware and power, even by 2020 standards. Not to mention, absolute best visual fidelity would require more detailed geometry, and more accurate physics of light, up to full-blown real-time ray-tracing with detailed materials, caustics, global illumination, and many bounces per ray - something that would require a genuine supercomputer to pull off at the necessary framerates, even given today's state of the art.

    So ultimately, it's all about diminishing returns, low-hanging fruit, good-enough designs, and balancing costs against benefits. In light of which, 16k VR is probably impractical for the foreseeable future (meaning, the next couple of decades)... Personally, I'd just be happy with a 4k virtual screen, spanning let's say 80% of my visual field, and kept static in real space via accelerometer-based head-tracking (to address motion sickness) with an option to intentionally reposition it when desired - then I wouldn't need any monitors any longer, and would be able to carry my high-res screen with/on me everywhere I go...
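
As a quick sanity check on the back-of-the-envelope arithmetic in the comment above, here is a minimal sketch in Python. It assumes "4k" means 3840x2160 and "16k" means 15360x8640, takes the commenter's generous 4-megapixel-per-eye figure as given, and adds a raw-bandwidth estimate assuming uncompressed 24-bit color; none of these figures come from AMD or any display vendor.

```python
# Rough check of the resolution figures discussed above.
# All inputs are approximations, not measured or vendor-supplied values.

GENEROUS_PIXELS_PER_EYE = 4_000_000   # the commenter's "generous" full-color estimate

pixels_4k = 3840 * 2160               # ~8.3 MP (assuming "4k" = UHD)
pixels_16k = 15360 * 8640             # ~132.7 MP, 16x the pixel count of 4k

print(pixels_4k / GENEROUS_PIXELS_PER_EYE)    # ~2.1  -> the "factor of 2"
print(pixels_16k / GENEROUS_PIXELS_PER_EYE)   # ~33   -> the "factor of 32"

# Raw pixel throughput for 16k per eye at 90 Hz: two eyes, 3 bytes per pixel.
bytes_per_second = pixels_16k * 2 * 90 * 3
print(bytes_per_second / 1e9)                 # ~71.7 GB/s of uncompressed pixel data
```
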
  • BMNify - Wednesday, January 27, 2016 - link

    "AMD's Raja Koduri stating that true VR requires 16K per eye at 240 Hz."

    well, according to the bbc, r&d scientific investigations found the optimum to be close to the 300 fps we were recommending back in 2008, and they show that higher frame rates dramatically reduce motion blur, which can be particularly disturbing on large modern displays.

    it seems optimal to just use the official UHD2 (8k) spec with multi surround sound and the higher real frame rates of 100/150/200/250 fps for high action content as per the existing bbc/nhk papers... no real need to define UHD3 (16k) for near eye/direct retina display
