Editor’s Opinion: A Culture of Information

As an aside to today's announcement, I had a few thoughts on how Intel releases product information. Seeing as this ventures closer to opinion/editorial than news & analysis, I felt it best not to mix it up with the key facts of the Whiskey/Amber Lake announcements. Nonetheless, I wanted to share my thoughts to give everyone a bit more insight into how information sharing has been changing over the past few years.

Readers who regularly follow us will note that with each and every generation, Intel has been less than forthcoming with details about new launches. In some areas, such as the enterprise team, that trend is slowly reversing, but for this launch almost all the technical information came in two slides, and for most of the specifications we had to submit follow-up questions. The data we used to get in a slide deck in previous years has now been relegated to ‘ask if you care about it’, which is a worrying policy from my point of view.
For example, if you are wondering where the information on the integrated graphics is, we’re still waiting on it, because it wasn’t provided in the group briefing. Details such as the name of the integrated graphics (UHD xxx), the number of execution units, base frequencies, and turbo frequencies were all standard fare in previous generations, as were the per-core turbo frequencies. We also ask for new information these days as our understanding of products increases, such as PL2 data.
Perhaps the best example of how Intel has changed is that the company didn’t even disclose information on the underlying microarchitecture or manufacturing node until asked. Information that used to be at the forefront of a presentation has been replaced with marketing, and is now left until the end.
This isn’t a direct attack on Intel - we are constantly engaging with the people we speak to at the company on the way that they disclose materials like this, encouraging them to be more forthcoming on day one, as the company used to be. The differences between notebook, desktop, and enterprise disclosure are down to the different product teams deciding individually what to disclose, rather than a common disclosure set running through the whole company.
Intel’s reaction to this, from the people we speak to, has always been one of co-operation. They have been honest when they are told they can’t disclose information, even if we ask every time because the information is arguably trivial to obtain elsewhere (we would rather Intel was the source, given that it is Intel’s product). The way Intel is going about the marketing message for these new platforms mirrors how it marks new generations of products: the assumption is that people aren’t interested in names or specific features. This is why we now have multiple manufacturing nodes and microarchitectures all under the ‘8th Generation’ branding. Products are sold on capabilities and user experiences, not on the fine minutiae of technical specifications – and this I do not doubt.
However, Intel has historically been a company that has delved deep into details consistently over the years, and that seems to be fading – for a company that takes pride in its engineering, it would be great to offer engineering details to the customers and analysts that track its progress.

Intel Launches Whiskey and Amber


Comments

  • bernstein - Tuesday, August 28, 2018 - link

    yeah, 2019 is shaping up to be really interesting, with amd's zen 2 on tsmc 7nm node vs intel 10nm ice lake vs. apple's a12 (& other 7nm arm cores)

    probably the first time in two decades that intel hasn't had a significant node advantage.

    it might just be possible to find intel in a zen2/a12 sandwich, where both deliver better performance, one in servers/desktops, the other in ultrabooks/tablets
  • ImSpartacus - Tuesday, August 28, 2018 - link

    Ice Lake certainly has IPC improvements, but Intel decided against backporting it to 14nm, so, yes, the 10nm delay functionally delays IPC improvements.
  • repoman27 - Tuesday, August 28, 2018 - link

    In order to increase IPC, you need to actually update the microarchitecture. All of the Intel Lakes are based on the Skylake µarch, thus all of them have the same IPC. The exception being Cannon Lake, which saw some µarch changes as well as being fabbed on 10 nm. I believe Ian had a laptop with a Cannon Lake Core i3-8121U in it, so maybe he could comment on any IPC gains he saw with that part.
  • abufrejoval - Tuesday, August 28, 2018 - link

    A process shrink is like having 20 fingers instead of 10: Just try to imagine how that would make you work double in the same time.

    The shrinks are hard, but turning shrinks into more of the existing work getting done in the same time frame is very, very, very hard when they have spent the last dozens of shrinks already performing that miracle. And in their case they can't even change the ISA and maintain backward compatibility.

    We have Spectre and Meltdown precisely because in their desperate search for more performance they went perhaps too far. Please put your mind into the problem before you demand the unreasonable.
  • HStewart - Wednesday, August 29, 2018 - link

    "We have Spectre and Meltdown precisely because in their desperate search for more performance they went perhaps too far. Please put your mind into the problem before you demand the unreasonable."

    That is not true - the 8705G in my Dell XPS 15 2-in-1 has Intel MPX, which is designed to detect buffer overflow or underflow.

    The Scalable Xeons have MBE - and I believe the new ones coming have something even better, directly related to hardware fixes for Spectre and Meltdown.

    Basically what it comes down to is that Intel does NOT require 10nm to fix the Spectre and Meltdown issues in hardware.

    Actually I think we have Spectre and Meltdown because there are people who hate Intel so much that they would find ways to attempt to destroy them with bugs that have yet to surface in real life - but in the end it also includes AMD and ARM.
  • HStewart - Thursday, August 30, 2018 - link

    I think the important thing is to note that this is an Opinion. And I believe everything on the net is opinion, except possibly the technical details on a product that has been officially released.

    My opinion is that Intel has done a very smart thing with these chips, and I believe the upcoming laptops/tablets will be both smaller and have longer battery life. They are doing this not because of the threat of AMD, but actually because of Windows on ARM laptops. But of course even the Y chip will be faster than ARM laptops - making Windows on ARM useless. Probably a key factor in Microsoft's decision not to use it in the Surface Go.
  • klagermkii - Tuesday, August 28, 2018 - link

    Why does Intel use the FireWire logo on their chipset diagram to represent SATA? Is there some kind of overlap?
  • IntelUser2000 - Tuesday, August 28, 2018 - link

    That's not a FireWire symbol. It's different.

    FireWire: https://developer.apple.com/softwarelicensing/agre...
    USB: https://www.quora.com/Who-designed-the-USB-symbol-...

    Wait. Maybe you meant the SATA symbol uses the FireWire logo. That's a flub on their part.
  • IntelUser2000 - Tuesday, August 28, 2018 - link

    Nevermind. Ignore the above.
  • repoman27 - Tuesday, August 28, 2018 - link

    No, that's weird. I mean, the official SATA-IO logos are pretty lame, but subbing the FireWire logo was an odd choice.
