Looking to the Future:
International Technology Roadmap for Semiconductors 2.0

The ten-year anniversary of Conroe comes at a time when the International Technology Roadmap for Semiconductors (ITRS) report into the next 10-15 years of the industry has been officially launched to the public. This biennial report, compiled by a group of semiconductor industry experts from the US, Europe and Asia, is designed to help the industry decide where to focus its R&D over the next 10-15 years, and it runs for nearly 500 pages. While we could go into extensive detail about the contents, we plan to give a brief overview here. For people interested in the industry, it’s a great read for sure.

The report includes deep discussions regarding test equipment, process integration, radio frequency (RF) implementations, microelectromechanical systems (MEMS), photolithography, factory integration, assembly, packaging, environmental issues, improving yields, modeling/simulation and emerging materials. By setting a focused path for a number of technologies, the hope is that the leading contenders in each part of the industry can optimize and improve the efficiency of directional research and development, with the possibility of collaboration, rather than taking many different routes.

Obviously such a report is going to make both successful and unsuccessful predictions, even with a group of experts, based on the introduction of moonshot-style features (such as FinFETs) or unforeseen limitations in future development. For example, here is the roadmap published by the Semiconductor Industry Association in its first report in 1993:


Original 1993 Semiconductor Industry Association roadmap

As we can see, by 2007 it was predicted that we would be on 100nm chips with up to 20 million ‘gates’, up to 4GB of SRAM per chip and 1250mm2 of logic per die. Up to 400mm wafers were expected in this timeframe, with 200W per die and 0.002 defects per square cm (roughly 1.4 defects per 300mm wafer).
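As a quick back-of-the-envelope check (a Python sketch of our own, not something from the report, and the helper name is ours), the expected defect count is simply the quoted defect density multiplied by the wafer area:

```python
# Expected defects per wafer at a given defect density (D0), ignoring
# edge exclusion: defects = D0 * wafer area.
import math

def defects_per_wafer(defect_density_per_cm2: float, wafer_diameter_mm: float) -> float:
    """Expected defect count for a wafer of the given diameter."""
    radius_cm = wafer_diameter_mm / 10 / 2
    area_cm2 = math.pi * radius_cm ** 2
    return defect_density_per_cm2 * area_cm2

print(round(defects_per_wafer(0.002, 300), 2))  # ~1.41 on a 300mm wafer
print(round(defects_per_wafer(0.002, 400), 2))  # ~2.51 on the predicted 400mm wafer
```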

Compare that to 2016, where we have 16/14nm lithography nodes running 300mm wafers, producing 15 billion transistors on a 610mm2 die (NVIDIA P100). Cache currently goes up to 60-65MB on the largest chips, and the power consumption of these chips (the ASIC power) is around 250W as well. So while the predictions were a little slow on the lithography node, various predictions about the integration of components onto a base processor were missed (memory controllers, chipsets, other IO).

What makes the most recent report different is that it is listed as the last report planned by the ITRS, to be replaced by a more generalized roadmap for devices and systems, the IRDS, as the utility of semiconductors has changed over the last decade. In this last report, a number of predictions and focal points have been picked up by the media, indicating a true end to Moore’s Law and asking how to progress beyond merely shrinking lithography nodes past 7nm. Part of this comes from the changing landscape, the move to IoT, and the demand for big data processing and storage; but the decreasing profitability/performance gain of smaller nodes relative to their development cost is also, if the report is believed, set to force a paradigm shift in integrated circuit development. This applies to processors, to mobile, to DRAM, and to other industry focal points such as data centers and communications.

I do want to quote one part of the paper verbatim here, as it ties into the fundamental principles of the future of semiconductor engineering:

“Moore’s Law is dead, long live Moore’s Law”

The question of how long Moore’s Law will last has been posed an infinite number of times since the 80s, and every 5-10 years publications claiming the end of Moore’s Law have appeared from the most unthinkable and yet “reputedly qualified” sources. Despite these alarmist publications the trend predicted by Moore’s Law has continued unabated for the past 50 years by morphing from one scaling method to another: where one method ended, the next one took over. This concept has completely eluded the comprehension of casual observers, who have mistakenly interpreted the end of one scaling method as the end of Moore’s Law. As stated before, bipolar transistors were replaced by PMOS, which was replaced by NMOS, which was in turn replaced by CMOS. Equivalent Scaling succeeded Geometrical Scaling when the latter could no longer operate, and now 3D Power Scaling is taking off.

By 2020-25 device features will be reduced to a few nanometers and it will become practically impossible to reduce device dimensions any further. At first sight this consideration seems a prelude to the unavoidable end of the integrated circuit era, but once again the creativity of scientists and engineers has devised a method ‘to snatch victory from the jaws of defeat’.


158 Comments


  • Ian Cutress - Tuesday, August 2, 2016 - link

    To clarify, there was a typo in Johan's original review of the microarchitecture, specifically stating:

    'However, Core is clearly a descendant of the Pentium Pro,'

    I've updated the article to reflect this, and was under the assumption that my source was correct at the point of doing my research.
  • wumpus - Tuesday, August 2, 2016 - link

    Except that the Pentium Pro was the first chip with the P6 architecture. The Pentium II was pretty much a Pentium Pro with MMX, a higher clock rate, and slower [off-chip but on-slot] L2 cache. The Pentium III was the same with more clock, more MMX (and SSE), and on-chip (full-speed) L2 cache.

    While I'd have to assume they would pull all the files from the Pentium III plans, I'd still call it "Pentium Pro based", because that was where nearly all the architecture came from (with minor upgrades and bug fixes to the core in the II and III).

    I'm still curious as to exactly how they pulled it off. My main theory is that they duplicated the block diagram of the P6, and then verified each block was correct (and exactly duplicated the P6 at a higher speed), then used the P6 verification to prove that if the blocks were all correct, they had a correct chip.
  • zodiacfml - Thursday, July 28, 2016 - link

    Same here. I thought it was the design of the Pentium M (from the Israel team) that they got Core from. That was the time when AMD was beating Intel's P4s in performance, efficiency, and price. After a few months, articles were posted with people able to overclock a Pentium M to the characteristics of the AMD CPUs and, of course, beat Pentium 4s at much lower clock speeds. From there, Intel's Core was born out of the Pentium M, which is essentially the same design, only with a higher TDP and clock speeds. Then came the Core Duo, then the Core 2 Duo.

    I just can't remember where I read it though.
  • marty1980 - Wednesday, July 27, 2016 - link

    I started college in electrical engineering; I moved to software after an EE class using C++. I was very excited and confident in a DIY PC. I knew the Core 2 was on its way. I gathered parts from whatever computers I could scratch together: power supply, case, DVD drive, network card(s), HDDs ... everything but the mobo, CPU, GPU and RAM - the brains.

    I bought an E6400 2.13GHz with a gigabyte mobo, 4GB 800MHz DDR2 and a Radeon x1650 Pro.

    I just retired that CPU and mobo in 2012/13 when I experimented with my current PC: an AMD APU + dedicated GPU (dual graphics).

    I'm excited to be looking at a future replacement for my PC. We're on the horizon of some interesting changes that I don't even understand (what was his article about? Lol).
  • just4U - Thursday, July 28, 2016 - link

    I seem to recall from a casual glance at an article (on this site) back some 9 years ago that Intel basically got lucky, or fluked as it were... something to do with what they were doing with the Pentium M, which caused them to move away from the P3/P4 stuff. Hmm... damned if I can remember what it was about, though.
  • FourEyedGeek - Tuesday, August 9, 2016 - link

    The Pentium III architecture was having difficulties increasing performance, so they replaced it with the Pentium 4's NetBurst. They had their Israel team continue work on the Pentium III, and that turned into the Pentium M.
  • Hazly79 - Thursday, July 28, 2016 - link

    Surprised that my 2005 Pentium D 3GHz can still run Diablo 3 (2012) at minimum settings, paired with an Nvidia GT 710 ($35 card).

    Really great optimization from Blizzard ent. team...
  • AnnonymousCoward - Thursday, July 28, 2016 - link

    Yeah, but too bad the game sucks. Jay doubled it.
  • name99 - Thursday, July 28, 2016 - link

    Two points:

    Firstly, macro-op fusion is hardly an x86 exclusive these days. Many (all?) ARMv8 CPUs use it, as do the most recent POWER CPUs. Like the x86 case, it's used to fuse pairs of instructions that commonly co-occur. Compare-and-branch is a common example, but other common examples in RISC are instruction pairs that are used to create large constants in a register, or to generate large constant offsets for loads/stores.
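    As a rough illustration (a toy Python model of our own, not how any real decoder is built, and the pair table here is purely illustrative), a fusion stage behaves like a peephole pass that merges known adjacent pairs in the decoded instruction stream:

    ```python
    # Toy model of a macro-op fusion stage: scan the decoded instruction
    # stream and merge adjacent pairs the core knows how to fuse.
    FUSIBLE_PAIRS = {
        ("cmp", "jne"),    # compare + conditional branch (x86 style)
        ("cmp", "je"),
        ("movz", "movk"),  # building a large constant in a register (ARMv8 style)
        ("adrp", "add"),   # forming a large address offset (ARMv8 style)
    }

    def fuse(stream):
        """Return the stream with fusible adjacent pairs merged into one macro-op."""
        out, i = [], 0
        while i < len(stream):
            if i + 1 < len(stream) and (stream[i][0], stream[i + 1][0]) in FUSIBLE_PAIRS:
                out.append(("fused:" + stream[i][0] + "+" + stream[i + 1][0],))
                i += 2  # the pair issues and retires as a single macro-op
            else:
                out.append(stream[i])
                i += 1
        return out

    print(fuse([("cmp", "eax", "0"), ("jne", "loop"), ("mov", "ebx", "1")]))
    # [('fused:cmp+jne',), ('mov', 'ebx', '1')]
    ```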

    Secondly, you suggest that the ROB is an expensive data structure. This is misleading. The ROB itself is primarily a FIFO and can easily be grown. The problem is that storing more items in the ROB requires more physical registers and more load/store queue entries, and it is THESE structures that are difficult and expensive to grow. This suggests that using alternative structures for the load/store queues, and alternative mechanisms for scavenging physical registers, could allow for much larger ROBs. In fact Intel has published a lot of work on this (but has so far done apparently nothing with the research, even though the first such publications were in the late 90s). I would not be surprised if Apple provides us with a CPU implementing these ideas before Intel does.
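    To make that coupling concrete, here is a minimal sketch (a toy Python model, not Intel's actual design; the sizes and names are illustrative): every in-flight instruction that writes a register holds both a ROB entry and a physical register until it retires, so dispatch stalls on whichever resource runs out first:

    ```python
    # Toy out-of-order dispatch: a bigger ROB only helps if the physical
    # register file (and load/store queues) grow with it.
    from collections import deque

    ROB_SIZE, PHYS_REGS = 224, 180       # illustrative sizes, not any real core's

    rob = deque()                        # in-flight instructions: a simple FIFO
    free_regs = list(range(PHYS_REGS))   # physical register free list

    def try_dispatch(instr):
        """Dispatch stalls when either the ROB or the register file is exhausted."""
        if len(rob) == ROB_SIZE:
            return "stall: ROB full"
        if not free_regs:
            return "stall: out of physical registers"
        instr["preg"] = free_regs.pop()  # rename: claim a physical destination
        rob.append(instr)
        return "dispatched"

    def retire():
        """Retire the oldest instruction, releasing its physical register."""
        if rob:
            free_regs.append(rob.popleft()["preg"])

    # With nothing retiring, the register file runs dry before the ROB fills:
    for n in range(ROB_SIZE):
        if (status := try_dispatch({"id": n})) != "dispatched":
            print(n, status)  # prints: 180 stall: out of physical registers
            break
    ```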
  • Ian Cutress - Tuesday, August 2, 2016 - link

    It wasn't written to the exclusion of all other microarchitectures; it was written with a focus on x86 back in 2006. At the time, the ROB was described as expensive by Intel, though I appreciate that might have changed.
