Intel this week confirmed that it has decided to close down its New Devices Group, which developed various wearable electronics, such as smartwatches, health/fitness monitors, and smart/AR glasses. The group was created five years ago by then-incoming CEO Brian Krzanich, who wanted to ensure that Intel’s chips would be inside millions of emerging devices. While wearables have become relatively popular, their adoption remains far below that of smartphones, and wearables made by Intel have never been among the market's bestsellers. Thus, the chip giant is pulling the plug.

Over the five-year history of NDG, Intel made two significant acquisitions to bring the necessary expertise into the group: the company took over Basis (a maker of fitness watches) in 2014 and Recon (a maker of wearable heads-up displays) in 2015. Most recently, NDG showcased its Vaunt (aka Superlight) smart glasses, which looked like “normal” glasses yet used laser beams to project information directly onto the retina, justifying their “smart” moniker. While NDG had cutting-edge technologies, the group never managed to produce a truly popular product. Moreover, when problems showed up on a limited number of its Basis smartwatches, Intel preferred to halt sales and refund customers rather than fix the problems and replace the faulty units.

In the second half of 2015, Intel folded the New Devices Group into the New Technology Group, which was a signal that the company was hardly satisfied with NDG’s performance. Since then, we have seen multiple reports of layoffs in Intel’s NDG and heard multiple rumors that the unit would be axed. Because making actual devices is generally unnatural for Intel, it was only a matter of time before the chip giant pulled the plug, and it apparently decided to do so this month.

Since Intel’s New Technology Group remains in place, all of Intel’s ongoing research projects for smart devices remain intact. More importantly, other Intel divisions continue to work on their products for wearables and ultra-low-power devices that will become widespread in the looming 5G era. The only products that are not going to see the light of day are those designed by the New Devices Group (e.g., the Vaunt glasses). Considering that none of NDG’s products ever became popular, it is unclear whether they will be missed.

It is noteworthy that Intel canned its Galileo, Joule, and Edison product lines aimed at the Internet of Things last summer.

Source: CNBC


  • mode_13h - Sunday, April 22, 2018 - link

    That's what I was thinking. The underlying machine state goes a long way towards enabling CISC, and this is not something you change with just the front end.

    RISC instructions often had a latency of more than one cycle - it's just that you could usually issue one every cycle. But that's more about achieving the necessary efficiency and less of a defining characteristic. Division is an example of an instruction many CPUs implement, but one that never would've been pipelined much (if at all) on older CPUs. Still, there are substantial benefits to hard-wiring it.
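The latency-versus-issue-rate distinction above can be sketched with a toy model (the cycle counts are illustrative, not taken from any real CPU): a pipelined unit accepts a new operation every cycle even though each one takes several cycles to finish, while a non-pipelined divider blocks until the current operation retires.

```python
def completion_times(n_ops, latency, pipelined):
    """Cycle at which each of n_ops back-to-back operations completes."""
    times = []
    for i in range(n_ops):
        if pipelined:
            start = i              # a new op can issue every cycle
        else:
            start = i * latency    # must wait for the previous op to finish
        times.append(start + latency)
    return times

# A 3-cycle pipelined multiply finishes 4 ops at cycles 3, 4, 5, 6;
# a 20-cycle non-pipelined divide needs cycles 20, 40, 60, 80.
print(completion_times(4, 3, pipelined=True))
print(completion_times(4, 20, pipelined=False))
```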
    Reply
  • FunBunny2 - Sunday, April 22, 2018 - link

    " ISAs affects *everything* - the registers, ALUs, flags, control logic, caches, memory model, etc. "

    yes, and cheaper machines in a family would do multiplication as serial add. and so on. and most of those aspects are mediated by the OS, anyway. it was Gates, not Intel, that decided no one needed more than 640K. now, it could be that Intel chose to use the increasing real estate to bring off-chip functions on-chip as a way to lock in clients. it was Grove who said (and wrote the book on it), "only the paranoid survive".
    Reply
  • wumpus - Saturday, April 21, 2018 - link

    About the only way x86 could possibly be assumed "RISC internal" is that they almost certainly split load/store instructions from other instructions. Beyond that, there's very little to RISC.

    Generally speaking, the truer to RISC an x86 implementation was, the less well it worked.

    The "most RISC" was the K5. That was pretty much a 29000 RISC chip, and it even used 29000 assembler to write the microcode. It failed badly.
    The NexGen Nx586 wasn't very RISC (80-bit instructions), but since you *could* code in them directly (instead of x86), I suspect it qualifies. It did only well enough to be bought by AMD, which produced the K6 next.
    Transmeta: the core of the machine didn't execute x86; that was handled in software. About as pure a RISC as x86 could get, and it failed hard.
    There were tales of x86-on-PowerPC. If any such project tried to break out of the lab (where technology goes to die), we never heard about it. Presumably nobody wanted to admit they existed.
    Reply
  • mode_13h - Sunday, April 22, 2018 - link

    > About the only way x86 could possibly be assumed "RISC internal" is that they almost certainly
    > split load/store instructions from other instructions. Beyond that, there's very little to RISC.

    Doesn't sound like you have a source on that. I think the reason we believe it's RISC is that Intel has previously referred to it as such, and we know that the typical case (and the only case until Core 2) was for one x86/x86-64 op to translate into multiple micro-ops.
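As an illustration of the "one x86 op, multiple micro-ops" point, here is a toy cracker in Python; the micro-op tuples and the `tmp0` rename register are invented for the sketch, since Intel's real internal micro-op encodings are not public.

```python
def crack(instruction):
    """Split a simplified 'op dst, src' instruction into micro-ops."""
    op, dst, src = instruction
    uops = []
    if src.startswith("["):            # memory source: emit a separate load
        uops.append(("load", "tmp0", src))
        src = "tmp0"
    uops.append((op, dst, src))        # register-to-register ALU micro-op
    return uops

# 'add eax, [rbx]' becomes a load micro-op plus a register add:
print(crack(("add", "eax", "[rbx]")))
# 'add eax, ecx' is already RISC-like and stays a single micro-op:
print(crack(("add", "eax", "ecx")))
```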

    I'm not aware of any published lists of the micro-op instruction sets in Intel CPUs, but here's some impressive reverse-engineering. You can infer the complexity of the micro-ops by looking at how many are generated by different x86 instructions and to which execution ports they go.

    http://www.agner.org/optimize/instruction_tables.p...

    Looks pretty RISCy to me.
    Reply
  • Wilco1 - Sunday, April 22, 2018 - link

    It's not RISC: the ISA is still CISC. Micro-ops on x86 implementations are very complex, so they can't possibly qualify as RISC. To give a simple example, the AGUs in most implementations support all of the complex x86 addressing modes, and complex addressing modes mean CISC.
    Reply
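The addressing-mode point above can be made concrete with a small sketch (the register values below are made up for illustration): an x86 AGU has to evaluate base + index*scale + displacement, whereas a typical RISC load only allows base + displacement.

```python
def x86_effective_address(base, index, scale, disp):
    """Effective-address computation for an x86 [base + index*scale + disp] operand."""
    assert scale in (1, 2, 4, 8), "x86 SIB scale is a 2-bit power of two"
    return base + index * scale + disp

# e.g. mov eax, [rbx + rcx*4 + 16] with rbx = 0x1000, rcx = 3:
print(hex(x86_effective_address(0x1000, 3, 4, 16)))  # 0x1000 + 12 + 16
```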
  • Samus - Friday, April 20, 2018 - link

    I think readers definitely get the everyday problems with x86. Just pulling out your pocket compu...smartphone and browsing the web is a dead giveaway of how limiting x86 is. An iPhone offers a better web browsing experience than many Core i5s. Just look at the SunSpider scores. Those are noticeable in everyday use, and no, it has nothing to do with Windows on your x86. It's the architecture running Windows. The long pipeline. The baggage.

    And if you doubt me, go use a Chromebook running a Pentium, then use a similar Chromebook running an Exynos. Sure, it's still not a perfect comparison (because the Pentium has a piss-poor iGPU), but just browsing the web is smoother.
    Reply
  • HStewart - Friday, April 20, 2018 - link

    The problem is not with the CPU or GPU; it is with the software and OS running on the device. The reason Windows has more issues with viruses and such is not the CPU but that Windows is more popular - and only now that Android and iOS are becoming more popular are they picking up more users and becoming bigger targets.

    And you can't really blame Windows either - a lot of it is because some people envy Microsoft's, and yes, Intel's success. There is also a lot of poorly written software, and some of it actually tries to take advantage of the situation. For example, the majority of viruses and such come from developers who used variants of Unix.

    SunSpider is not a good example of a benchmark - it is JavaScript-based and has many dependencies on the browser and OS. One should use a compiled benchmark instead of an interpreted one.

    A Chromebook is a bad example - do you really think Google wants to make its x86 systems better?
    Reply
  • JoJ - Saturday, April 21, 2018 - link

    I wish I could find better links, but below is RWT's not-really-that-old note on the subject:
    https://www.realworldtech.com/risc-vs-cisc/

    RISC vs. CISC Still Matters
    February 13, 2000 by Paul DeMone

    And then I cop out and offer you search results from HN, but they do all seem to be of a high standard.

    The forums at RWT, the site of the first article linked, are excellent on subjects like this, if you're able to find the threads. If you need to narrow things down, the Itanium saga had its most comprehensive, unflagging and unwaveringly loyal dissection in the RWT forums, and I consider the discussions that took place there circa 2000 to have been an education. You might narrow your search by looking for the dates of HPE Itanium launches, which prompted debate there.

    https://news.ycombinator.com/item?id=12353489
    Reply
  • mode_13h - Saturday, April 21, 2018 - link

    Don't you have anything better to do with your time than being offended on these forums?

    The problem with x86 is the complexity of the instruction decoder. It's a kludge because the opcode space and instruction format weren't planned to accommodate all of the various extensions. This means it requires larger, more power-hungry decoders. That's its biggest liability for IoT, where devices need to run on microwatts.
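The decoder point above can be illustrated with a toy variable-length ISA (the opcode-to-length table is invented, not real x86): with fixed 4-byte instructions the Nth instruction starts at byte 4*N, so several decoders can work in parallel, whereas with variable lengths each instruction boundary depends on having decoded everything before it.

```python
# Invented opcode byte -> instruction length, for illustration only.
TOY_LENGTHS = {0x90: 1, 0x05: 5, 0x0F: 3, 0x66: 6}

def find_boundaries(code):
    """Sequentially walk a byte string to find instruction start offsets."""
    offsets, pos = [], 0
    while pos < len(code):
        offsets.append(pos)
        pos += TOY_LENGTHS[code[pos]]  # length is unknown until this byte is decoded
    return offsets

stream = bytes([0x90, 0x05, 0, 0, 0, 0, 0x0F, 0, 0, 0x90])
print(find_boundaries(stream))       # boundaries emerge one at a time
print([4 * i for i in range(3)])     # fixed 4-byte ISA: boundaries known up front
```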
    Reply
  • PeachNCream - Friday, April 20, 2018 - link

    It's not a surprise at all to see NDG go. Wearable devices aren't particularly popular. I've seen a small number of Fitbits that made it a few months on a wrist before disappearing, and I know of one person who's bothered with a smartwatch, but he's one of those people who has to buy the latest gadget, the most expensive phone, and the nicest new car, so in his case it was no surprise. I'm just glad I'm not his spouse, because that guy is going to have a miserable time later in life when he figures out he's burned through everything he's earned without ever putting something away for a rainy day or as a nest egg.
    Reply
