Looking to the Future:
International Technology Roadmap for Semiconductors 2.0

The ten-year anniversary of Conroe comes at a time when the International Technology Roadmap for Semiconductors (ITRS) report into the next 10-15 years of the industry has been officially launched to the public. This biennial report, compiled by a group of semiconductor industry experts from the US, Europe, and Asia, is designed to help the industry decide where to focus R&D over the next 10-15 years, and it runs for nearly 500 pages. While we could go into extensive detail about the contents, we will only give a brief overview here; but for anyone interested in the industry, it's a great read.

The report includes deep discussions of test equipment, process integration, radio frequency (RF) implementations, microelectromechanical systems (MEMS), photolithography, factory integration, assembly, packaging, environmental issues, yield improvement, modeling/simulation, and emerging materials. By setting a focused path for a number of technologies, the hope is that the leading contenders in each part of the industry can optimize and improve the efficiency of directional research and development, with the possibility of collaboration, rather than taking many different routes.

Obviously such a report is going to contain both successful and unsuccessful predictions, even with a group of experts, whether due to the introduction of moonshot-style features (such as FinFETs) or unforeseen limitations in future development. For example, here is the roadmap published by the Semiconductor Industry Association in its first report in 1993:


Original 1993 Semiconductor Industry Association roadmap

As we can see, by 2007 it was predicted that we would be on 100nm chips with up to 20 million ‘gates’, up to 4GB of SRAM per chip, and 1250mm2 of logic per die. Up to 400mm wafers were expected in this timeframe, with 200W per die and 0.002 defects per square cm (or roughly 1.4 defects per 300mm wafer).
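
As a quick sanity check on those figures, here is a back-of-the-envelope sketch in Python (an illustration only, assuming defects are spread uniformly so that the expected defect count is simply density times wafer area):

    import math

    def defects_per_wafer(defect_density_per_cm2, wafer_diameter_mm):
        """Expected defect count for one wafer, assuming a uniform defect density."""
        radius_cm = wafer_diameter_mm / 10.0 / 2.0    # mm -> cm, diameter -> radius
        wafer_area_cm2 = math.pi * radius_cm ** 2     # ~706.9 cm^2 for a 300mm wafer
        return defect_density_per_cm2 * wafer_area_cm2

    # 1993 roadmap target for 2007: 0.002 defects per square cm
    print(defects_per_wafer(0.002, 300))   # ~1.41 defects on a 300mm wafer
    print(defects_per_wafer(0.002, 400))   # ~2.51 defects on the 400mm wafers then expected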

Compare that to 2016, where we have 16/14nm lithography nodes running on 300mm wafers, producing 15 billion transistors on a 610mm2 die (NVIDIA P100). Cache currently goes up to 60-65MB on the largest chips, and the power consumption of these chips (the ASIC power) is around 250W as well. So while the predictions were a little slow on the lithography node, they missed the integration of various components onto the base processor (memory controllers, chipsets, other IO).
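
For a sense of the scale of that gap, here is a minimal sketch comparing the quoted densities. Note that a ‘gate’ is built from several transistors, so the two figures are not the same unit; this is an order-of-magnitude comparison only, using the numbers quoted above:

    # 1993 prediction for 2007 vs. a 2016 part (NVIDIA P100), figures as quoted above
    predicted_gates_2007 = 20e6         # up to 20 million 'gates'
    predicted_logic_area_mm2 = 1250.0   # 1250 mm^2 of logic per die

    p100_transistors = 15e9             # ~15 billion transistors
    p100_die_area_mm2 = 610.0

    print(predicted_gates_2007 / predicted_logic_area_mm2)  # ~16,000 gates per mm^2
    print(p100_transistors / p100_die_area_mm2)             # ~24.6 million transistors per mm^2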

What makes the most recent report different is that it is listed as the last one planned by the ITRS, to be replaced by a more generalized roadmap for devices and systems, the IRDS, as the role of semiconductors has changed over the last decade. In this final report, a number of predictions and focal points have been picked up by the media: a true end to Moore’s Law, and the question of how to progress once lithography nodes can no longer simply shrink below 7nm. Part of this comes from the changing landscape, the move to IoT, and the demand for big data processing and storage; but it also comes from the decreasing profitability/performance gain of smaller nodes compared with their development cost, which, if the report is to be believed, is set to force a paradigm shift in integrated circuit development. This applies to processors, to mobile, to DRAM, and to other industry focal points such as data centers and communications.

I do want to quote one part of the paper verbatim here, as it ties into the fundamental principles of the future of semiconductor engineering:

“Moore’s Law is dead, long live Moore’s Law”

The question of how long Moore’s Law will last has been posed an infinite number of times since the 80s, and every 5-10 years publications claiming the end of Moore’s Law have appeared from the most unthinkable and yet “reputedly qualified” sources. Despite these alarmist publications, the trend predicted by Moore’s Law has continued unabated for the past 50 years by morphing from one scaling method to another: where one method ended, the next one took over. This concept has completely eluded the comprehension of casual observers, who have mistakenly interpreted the end of one scaling method as the end of Moore’s Law. As stated before, bipolar transistors were replaced by PMOS, which was replaced by NMOS, which was in turn replaced by CMOS. Equivalent Scaling succeeded Geometrical Scaling when the latter could no longer operate, and now 3D Power Scaling is taking off.

By 2020-25, device features will be reduced to a few nanometers and it will become practically impossible to reduce device dimensions any further. At first sight this consideration seems to be a prelude to the unavoidable end of the integrated circuit era, but once again the creativity of scientists and engineers has devised a method ‘to snatch victory from the jaws of defeat’.

Comments

  • perone - Friday, July 29, 2016 - link

    My E6300 is still running fine in a PC I have donated to a friend.
    It was set to 3GHz within a few days of purchase and never moved from that speed.
    Once or twice I changed the CPU fan as it was getting noisy.

    Great CPU and a great motherboard, the Asus P5B.
  • chrizx74 - Saturday, July 30, 2016 - link

    These PCs are still perfectly fine if you install an SSD. I did it recently on an Acer Aspire T671 desktop. After modding the BIOS to enable AHCI, I put in an 850 Evo (runs at SATA 2 speed) and a pretty basic Nvidia GFX card. The system turned super fast and runs Windows 10 perfectly fine. You don't need faster processors; all you need is to get rid of the HDDs.
  • Anato - Saturday, July 30, 2016 - link

    I'm still running an AMD Athlon X2 4850 2.5GHz as a file server + MythTV box. It supports ECC, is stable, and has enough grunt to do its job, so why replace it? Yes, I could get a bit better energy efficiency, but in my climate heating is needed >50% of the time, and new hardware has its risks of compatibility issues etc.

    +10 for anandtech again, article was great as always!
  • serendip - Sunday, July 31, 2016 - link

    I'm posting this on a Macbook with an E6600 2.4 GHz part. It's still rockin' after six years of constantly being tossed into a backpack. The comparisons between C2D and the latest i5 CPUs don't show how good these old CPUs really are - they're slow for hard number crunching and video encoding but they're plenty fast for typical workday tasks like Web browsing and even running server VMs. With a fast SSD and lots of RAM, processor performance ends up being less important.

    That's too bad for Intel and computer manufacturers because people see no need to upgrade. A 50% performance boost may look like a lot on synthetic benchmarks but it's meaningless in the real world.
  • artifex - Monday, August 1, 2016 - link

    "With a fast SSD and lots of RAM, processor performance ends up being less important."

    I remember back when I could take on Icecrown raids in WoW with my T7200-based Macbook.
    And I actually just stopped using my T7500-based Macbook a few months ago. For a couple of years I thought about seeing if an SSD would perk it back up, but decided the memory bandwidth and size limitations, and the graphics, were just not worth the effort. Funny that you're not impressed by i5s; I use a laptop with an i5-6200U now. (Some good deals on those right now, especially if you can put up with the integrated graphics instead of a discrete GPU.) But then, my Macbooks were about 3 years older than yours :)
  • abufrejoval - Sunday, July 31, 2016 - link

    Replaced three Q6600s on P45 systems with socket-converted Xeon X5492s at $60 each off eBay. Got 3.4GHz quads now, never using more than 60 watts under Prime95 (150 watts "official" TDP), with 7870/7950 Radeons or a GTX 780 running all modern games at 1080p at high or ultra. Doom with Vulkan is quite fun at Ultra. Got my kids happy and bought myself a 980 Ti with the savings. If you can live with 8GB (DDR2) or 16GB (DDR3), it's really hard to justify an upgrade from this 10-year-old stuff.

    Mobile is a different story, of course.
  • seerak - Monday, August 1, 2016 - link

    My old Q6600 is still working with a friend.

    The funny part is that he (used to) work for Intel, and 6 months after I gave it to him in lieu of some owed cash, he bought a 4790K through the employee program (which isn't nearly as good as you'd think) and built a new system with it.

    The Q6600 works so well he's never gotten around to migrating to the new box - so the 4790K is still sitting unused! I'm thinking of buying it off him. I do 3D rendering and can use the extra render node.
  • jeffry - Monday, August 1, 2016 - link

    That's a good point. It's like answering the question "are you willing to pay $800 for a new CPU to double the computer's speed?" Most consumers say no. It all comes down to the mass-market price.
  • wumpus - Thursday, August 4, 2016 - link

    Look up what Amazon (and anybody else buying a server) pays for the rest of the computer and tell me they won't pay $800 (per core) to double the computer's speed. It isn't a question of cost, Intel just can't do it (and nobody else can make a computer as fast as Intel, although IBM seems to be getting close, and AMD might get back in the "almost as good for cheap" game).
  • nhjay - Monday, August 1, 2016 - link

    The Core 2 architecture has served me well. Just last year I replaced my server at home which was based on a Core 2 Duo E6600 on a 965 chipset based motherboard. The only reason for the upgrade is that the CPU was having a difficult time handling transcoding jobs to several Plex clients at once.

    The desktop PC my kids use is Core 2 based, though slightly newer. It's a Core 2 Quad Q9400-based machine. It is the family "gaming" PC, if you dare call it that. With a GT 730 in it, it runs the older games my kids play very well, and Windows 10 hums along just fine.
