An interesting talk regarding the IoT aspects of Intel's next-generation Atom core, Goldmont, and the Broxton SoCs aimed at that market offered a good chunk of information about the implementation of the Broxton-M platform. Users may remember the Broxton name from the family of smartphone SoCs cancelled several months ago, but the core design and SoC integration are still highly relevant for IoT and mini-PCs, and sit at the heart of Intel's new Joule IoT platform, which was also announced at IDF this week.

Broxton, in the form described to us, will follow the previous Braswell platform in that it runs in a quad-core configuration (for most SKUs) consisting of two sets of dual cores sharing a common L3 cache, but will be paired with Intel's Gen9 graphics. The GPU will come in either a 12 execution unit (EU) or 18 EU configuration, suggesting that the core has a single graphics slice and that Intel will use its binning strategy to determine which EUs are enabled on each sub-slice.

It was listed that Broxton will support VP9 encode and decode, as well as H.264. HEVC decode will be supported, but Intel was hazy on whether it is fully hardware accelerated, saying that 'currently we have a hybrid situation but this will change in the future'. There will also be OpenCV libraries available for computer vision workloads, with optimizations that focus specifically on the graphics architecture.
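The talk did not detail what these libraries contain, but as a rough illustration of the kind of per-pixel kernel that computer-vision libraries like OpenCV hand off to optimized GPU code paths, here is a minimal 3x3 box blur in plain Python (our own sketch; the function name and code are illustrative, not Intel's API):

```python
# Minimal sketch of the kind of per-pixel kernel that computer-vision
# libraries such as OpenCV accelerate on GPU execution units.
# Pure Python, illustrative only; not Intel's or OpenCV's implementation.

def box_blur_3x3(image):
    """Apply a 3x3 box blur to a 2D list of pixel intensities.

    Edge pixels are left unchanged for simplicity.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = sum(image[y + dy][x + dx]
                        for dy in (-1, 0, 1)
                        for dx in (-1, 0, 1))
            out[y][x] = total // 9
    return out

# A bright dot on a dark background gets smeared across its neighbours.
img = [[0] * 5 for _ in range(5)]
img[2][2] = 90
blurred = box_blur_3x3(img)
print(blurred[2][2])  # 10: the 90 is averaged over the 3x3 window
```

Every output pixel is independent, which is exactly why this class of workload maps well onto the EU arrays in a Gen9-style GPU.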

It's worth noting that one slide lists the memory controller as supporting DDR3L and LPDDR4, while later in the presentation only DDR3L is stated. We asked about this, and were told that LPDDR4 was in the initial design specification but may or may not make it into a final product (or may only appear in certain SKUs). DDR3L, however, is guaranteed.

It was confirmed that Broxton will be produced on Intel's 14nm process, featuring a Gen9 GPU with 4K encode and decode support for HEVC (though it was not stated whether this is hardware accelerated or hybrid). The graphics part will feature an upgraded image processing unit, different from other Gen9 implementations, and we will see the return of extended temperature processors (-40°C to 110°C) for embedded applications.

One of the big plus points for Broxton will be support for dual-channel ECC memory. This opens up a few more markets where ECC is a fundamental requirement for operation. The slides also claim 50-80% higher memory bandwidth than Braswell, which is an interesting statement if the platform does not support LPDDR4 (unless the claim is limited to the specific SKUs where LPDDR4 is supported).
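ECC DRAM controllers typically implement SECDED-style codes in hardware. As a software illustration of the underlying principle only (not Intel's design), a toy Hamming(7,4) code shows how three parity bits let a receiver locate and flip back any single corrupted bit in a 4-bit payload:

```python
# Toy Hamming(7,4) encoder/corrector illustrating the single-bit-error
# correction that ECC memory performs in hardware. Illustrative sketch,
# not Intel's implementation; real controllers use wider SECDED codes.

def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into 7 bits with 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # codeword positions 1..7

def hamming74_correct(code):
    """Return the codeword with any single flipped bit corrected."""
    c = code[:]
    # Each parity check covers the positions whose 1-based index has
    # the corresponding bit set; together they spell out the bad position.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
corrupted = word[:]
corrupted[4] ^= 1  # flip one bit, as a DRAM soft error would
assert hamming74_correct(corrupted) == word
```

In hardware this check happens on every memory read, which is why ECC matters for the always-on embedded deployments Broxton is targeting.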

All the display outputs from the Broxton SoC will support 4K60 over eDP and DP/HDMI, along with more USB ports and support for eMMC 5.0. The support for 4K60 on HDMI might suggest full HDMI 2.0 support, though it is not clear whether this is limited to 4:2:0 chroma subsampling or extends higher. The Broxton SKUs in this presentation were described as having a 6-12W TDP, with support for a number of Linux flavors, Android, and Windows 10. We asked about Windows 7 support, and were told that while Broxton will likely support it, given the limited timeframe it is not expected to be promoted as a feature. On frequency, we were told to expect quad-core parts at perhaps 1.8-2.6 GHz. This would be in line with what we expect: a small bump over Braswell.
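To put the 4:2:0 question in context, here is our own back-of-the-envelope arithmetic for uncompressed 4K60 pixel data. Halving the chroma resolution (4:2:0 averages out to 1.5 samples per pixel versus 3 for 4:4:4) roughly halves the required link bandwidth, which is how older HDMI 1.4-class links manage 4K60 at all:

```python
# Rough 4K60 bandwidth arithmetic: why chroma subsampling matters for HDMI.
# Nominal figures and our own estimates; real links add blanking intervals
# and line-code overhead, so treat these as order-of-magnitude numbers.

width, height, fps, bits_per_sample = 3840, 2160, 60, 8

def link_gbps(samples_per_pixel):
    """Active pixel data rate in Gbps for a given sampling scheme."""
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e9

full_chroma = link_gbps(3)    # 4:4:4 - three full-resolution samples/pixel
sub_chroma = link_gbps(1.5)   # 4:2:0 - chroma shared across 2x2 blocks

print(f"4K60 8-bit 4:4:4: {full_chroma:.1f} Gbps")  # ~11.9 Gbps
print(f"4K60 8-bit 4:2:0: {sub_chroma:.1f} Gbps")   # ~6.0 Gbps
```

The full-chroma figure is what pushes 4K60 past HDMI 1.4-class bandwidth and into HDMI 2.0 territory, so whether Broxton's HDMI output is 4:2:0-only or full 4:4:4 is a meaningful distinction.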

We are still waiting on more detailed information regarding Goldmont and Goldmont-based SoCs like Broxton, and will likely have to wait until they enter the market before the full ecosystem of products is announced. 

21 Comments

  • name99 - Thursday, August 18, 2016 - link

    "but the core design and SoC integration is still mightily relevant for IoT and mini-PCs"

    That word, IoT, I do not think it means what you think it means...

    IoT for the most part refers to compute moving seriously down in the stack --- smart scales, BT-enabled thermometers and blood pressure cuffs, room air temperature and quality monitors, etc. These are devices that require three months (at least) of life off one battery, not devices that need quad-core and 18EU GPUs.
    MiniPCs are not the same thing as IoT, not even close.

    Just because Intel PR throws out insane crap every IDF doesn't mean that you have to report it as though it actually makes sense... In the real world this looks like yet another Atom. Meaning a CPU that's relevant to people who actually NEED Windows and irrelevant to everyone else (ie the ACTUAL IoT) who will continue to use ARM just like before, with nothing about this chip making Atom any more compelling.
  • FunBunny2 - Thursday, August 18, 2016 - link

    -- These are devices that require three months (at least) of life off one battery, not devices that need quad-core and 18EU GPUs.
    MiniPCs are not the same thing as IoT, not even close.

    a very, very long time ago (~1980) some guy from Intel was quoted, "I'd rather be in every Ford than every PC". there are lots of places, with mains power of course, that can be IoT. more, I'd guess, than on somebody's wrist.
  • Murloc - Thursday, August 18, 2016 - link

    you're right that most IoT doesn't need this power and doesn't require windows but your examples are quite limited.
    Cameras, cars, variable speed limit signs, smart meters etc. are IoT as well and don't necessarily have battery issues.
  • Daniel Egger - Thursday, August 18, 2016 - link

    > That word, IoT, I do not think it means what you think it means...
    > IoT for the most part refers to compute moving seriously down in the stack --- smart scales, BT-enabled thermometers and blood pressure cuffs, room air temperature and quality monitors, etc.

    Problem is that those devices lack the "I" in IoT, so you need gateways translating the lowest-power physical layers and protocols into IP communication. That's probably what Intel envisions where those new CPUs fit in: IoT gateways.
  • MrSpadge - Thursday, August 18, 2016 - link

    Most IoT devices require little processing power. That is, most devices we can currently think of. Give people more power in a small and affordable envelope (not judging whether Broxton fulfils this) and they'll unlock more applications which were not possible with the previous, limited hardware.
  • djc208 - Thursday, August 18, 2016 - link

    That's a very narrow view of IoT. A Nest thermostat is an IoT device that has access to power full time. A home security system is another, both could take advantage of this kind of stuff, though it's overkill, but plenty of "things" that could become smarter and are expected to always have ample power.

    The IDF keynote talked about IoT on an industrial scale, and I think that is more the aim of these devices, at least initially. As usual the hard part for Intel is countering the raft of cheap and almost equally capable ARM cores that could be used for similar tasks.
  • ToTTenTranz - Thursday, August 18, 2016 - link

    I'm working on an IoT project and writing articles for IoT-dedicated papers. The device in question doesn't need 3 months battery (more like ~200 times less) and more processing power is actually very welcome.

    Your definition of IoT fails in the part where it assumes that the processing power must be very low when compared to multicore ARM and x86 chips, which is definitely not true.
  • KaarlisK - Thursday, August 18, 2016 - link

    Where can it be seen that LL cache = L3 cache?
  • iwod - Thursday, August 18, 2016 - link

    Any new information / performance data on Goldmont?
    And is the Gen9 GPU the same as the one in Kaby Lake? i.e. HEVC Main 10 is actually 100% hardware decode rather than a hybrid approach.
  • npz - Friday, August 19, 2016 - link

    It's not since they mentioned: "currently we have a hybrid situation but this will change in the future"

    It's fixed function hardware, so whatever changes will have to come in a different revision of the chip with added functionality, unless they plan on using the GPU shader units for 10-bit decode. But I'm not sure if that is more efficient than using the CPU cores themselves.
