Looking To The Future: 450mm Wafers in 2021, and Down to ‘2nm’

Most users following the industry will know that the name of a lithographic node, such as ‘14nm’, ‘16nm’ or ‘28nm’, is often an imperfect characterization of what is actually happening on the silicon. For example, Intel’s 14nm node is widely touted as the best in the industry, and the nominally equivalent 14nm/16nm nodes from other manufacturers are said to not be even close, due to Intel’s several generations of FinFET experience compared to the other main semiconductor manufacturers. It also means that while ‘14nm’ or ‘16nm’ is quoted for a product, many other parts of the silicon could be manufactured on other nodes, depending on their function.

We’ve spoken in the past about equipment roadmaps designed to deal with the problem of shrinking lithography nodes. This includes new lasers, droplet generation, collectors, masks, defect detection, lenses, and the technology to drive high throughput. For example, here are ASML’s EUV requirements for the next few manufacturing nodes, including a 250W light source at the 5nm node:
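
To give a feel for why source power is the headline number, here is a rough first-order sketch (our illustrative assumption, not an ASML figure): at a fixed resist dose, scanner throughput scales approximately linearly with usable source power, which is why the step up to 250W sources matters so much for making EUV economical.

```python
# Illustrative-only model: at a fixed resist dose, EUV scanner throughput
# scales roughly linearly with usable source power (assumption, not ASML data).
def relative_throughput(source_power_w, baseline_w=125):
    return source_power_w / baseline_w

for power in (80, 125, 250):
    print(f"{power}W source: ~{relative_throughput(power):.1f}x the 125W baseline")
```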

The ITRS report makes a couple of predictions regarding node sizes and wafer sizes. On the latter, we get the prediction that 450mm wafers should be in play at around 2021 for DRAM:

This coincides with a DRAM half-pitch of 15nm, or a DRAM cell size of 780nm², compared to 3480nm² today. The move to 450mm wafers has been expected for some time, with early predictions aiming for the 2008-2011 timeframe; however, development seems to have stalled for a variety of reasons.
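
As a back-of-the-envelope illustration of why these two numbers matter together, the short sketch below (plain Python, using only the figures quoted above, and ignoring real-world factors such as edge exclusion and yield) compares gross wafer area and DRAM cell density:

```python
# Back-of-the-envelope comparison using the predictions above:
# 300mm -> 450mm wafers, DRAM cell size 3480nm^2 -> 780nm^2.

d_300, d_450 = 300, 450                 # wafer diameters in mm

# Gross wafer area scales with the square of the diameter.
area_ratio = (d_450 / d_300) ** 2
print(f"450mm vs 300mm wafer area: {area_ratio:.2f}x")      # 2.25x

cell_today, cell_2021 = 3480, 780       # DRAM cell sizes in nm^2
density_gain = cell_today / cell_2021
print(f"DRAM cells per unit area:  {density_gain:.2f}x")    # ~4.46x

# If both changes land together, cells per wafer scale multiplicatively.
print(f"Cells per wafer:           {area_ratio * density_gain:.1f}x")  # ~10x
```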

When it comes to transistors, given the node naming caveats listed above, the ITRS report predicts that FinFET will make way around the 2021 timeframe in favor of 3D arrangements such as ‘vertical gate-all-around’ (VGAA) transistors. As the node, and with it the transistor size, shrinks, the power supply voltage is predicted to drop from 800mV today down to 400-450mV by 2027.
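
To put that voltage drop in perspective: first-order dynamic switching power in CMOS scales as C·V²·f, so lowering the supply from 800mV to 450mV alone cuts switching power to roughly a third, all else being equal. A minimal sketch (the 800mV and 450mV figures come from the report; the capacitance and frequency below are placeholders purely for illustration):

```python
# First-order dynamic CMOS switching power: P ~ C * V^2 * f.
def dynamic_power(c_farads, v_volts, f_hz):
    return c_farads * v_volts ** 2 * f_hz

# Placeholder capacitance and clock; only the ratio matters here.
C, f = 1e-9, 1e9
p_today = dynamic_power(C, 0.80, f)     # 800mV supply (today)
p_2027  = dynamic_power(C, 0.45, f)     # 450mV supply (2027 prediction)
print(f"Relative switching power at 450mV: {p_2027 / p_today:.2f}x")  # ~0.32x
```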

Moving into a world of IoT will require even lower-power transistors, and moving to a ‘smaller node’ along with vertical chip integration will be a large part of getting there.

The ITRS report goes into a lot of detail regarding different stacking technologies, such as the 2.5D integration we see on products like AMD’s Fiji GPU, which uses a silicon interposer, as well as full 3D stacking and the different ways to implement both microchannel cooling and 3D through-silicon via (TSV) applications in such chips.

Comments

  • Jon Tseng - Wednesday, July 27, 2016 - link

    Great chip. Only just upgraded from my QX6850 last month. Paired with a GTX 970 it was doing just fine running all new games maxed out at 1080p. Amazing for something nearly a decade old!!
  • Negative Decibel - Wednesday, July 27, 2016 - link

    my E6600 is still kicking.
  • tarqsharq - Wednesday, July 27, 2016 - link

    My dad still uses my old E8400 for his main PC. He's getting my old i7-875k soon though.
  • jjj - Wednesday, July 27, 2016 - link

    You can't do DRAM in glasses, not in a real way, and glasses are what mobile becomes by 2025.
    On-package DRAM is next year or soon after, not 2025.
    You can't have big cores either, and you need ridiculous GPUs and extreme efficiency. Parallelism and accelerators, that's where computing needs to go, from mobile to server.
    We need 10-20 mm³ chips, not 100cm² boards. New NV memories, not DRAM, and so on.
    Will be interesting to see who goes 3D first with logic on logic, and then who goes 3D first as the default in their most advanced process.

    At the end of the day, even if the shrinking doesn't stop, 2D just can't offer enough for the next form factor. Much higher efficiency is needed, and the size of a planar chip would be far too big to fit in the device while the costs would be mad. Much more is needed. For robots too. The costs and efficiency need to scale, and with planar the gains are at best small.
  • wumpus - Thursday, August 4, 2016 - link

    On-package DRAM seems to be a "forever coming" tech. AMD's Fury X basically shipped it, and it went nowhere. I'm guessing it will be used whenever Intel or IBM feel it can be used for serious advantage on some high-core server chip, or possibly when Intel wants to build a high-speed DRAM cache (with a high-speed bus) and use 3D XPoint for "main memory".

    The slow rollout is shocking. I'm guessing NVIDIA eventually gave up on it and went with tiling (see the Kanter demo on the left, but ignore the thread: nothing but fanboys beating their chests).
  • willis936 - Wednesday, July 27, 2016 - link

    I'm certainly no silicon R&D expert but I'm very skeptical of those projections.
  • Mr.Goodcat - Wednesday, July 27, 2016 - link

    Typo:
    "On the later, we get the prediction that 450nm wafers should be in play at around 2021 for DRAM"
    450nm wafers would be truly interesting ;-)
  • wumpus - Thursday, August 4, 2016 - link

    I like the rapidly falling static safety. Don't breathe on a 2030 chip.
  • faizoff - Wednesday, July 27, 2016 - link

    My first Core 2 Duo was an E4400 that I bought in 2007, I believe. That thing lasted me up to 2011, when I upgraded to an i5 2500k. I should've kept that C2D just for nostalgia's sake; I used it intermittently as a Plex server and it worked great on FreeNAS. The only issue was it was really noisy and would get hot.
  • Notmyusualid - Thursday, July 28, 2016 - link

    I've got a few old servers kicking around, all with valid Windows Server licenses, but due to UK electricity costs I just can't bring myself to have them running at home 24/7 just to serve a backup, or yet another Breaking Bad viewing session... :) which we can do locally now.
