Light Sources

While many other technologies are tangentially related to shrinking feature size, one of the primary drivers over the past four decades or so has been the move to shorter wavelengths of light. Shorter wavelengths mean higher resolution, much like how the electron microscope’s shorter de Broglie wavelength increased resolution over light microscopes. Thus, photolithography can increase in resolution by using light sources that generate shorter wavelengths.

Lithography Wavelength vs Resolution / Guiding light / CC BY SA
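
As a rough way to quantify this relationship, the minimum printable half-pitch is commonly estimated with the Rayleigh criterion, CD = k1·λ/NA. The sketch below is only a back-of-the-envelope calculation; the k1 and NA values are representative assumptions, not the specs of any particular tool.

```python
# Rough Rayleigh-criterion estimate of minimum half-pitch: CD = k1 * wavelength / NA.
# The k1 and NA values below are representative assumptions, not data for any specific scanner.

def min_half_pitch_nm(wavelength_nm: float, k1: float, na: float) -> float:
    """Estimate the smallest printable half-pitch for a given exposure setup."""
    return k1 * wavelength_nm / na

sources = [
    ("Mercury i-line", 365.0, 0.50, 0.60),  # older steppers, relaxed k1
    ("KrF excimer",    248.0, 0.40, 0.80),
    ("ArF dry",        193.0, 0.35, 0.93),
    ("ArF immersion",  193.0, 0.28, 1.35),  # water immersion pushes NA above 1
    ("EUV",             13.5, 0.40, 0.33),
]

for name, wl, k1, na in sources:
    print(f"{name:>14}: ~{min_half_pitch_nm(wl, k1, na):5.1f} nm half-pitch "
          f"(wavelength={wl} nm, k1={k1}, NA={na})")
```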

This progression has happened steadily over the years, starting with mercury lamps that produced UV light at around 436nm (g-line) and 365nm (i-line). Once this was no longer sufficient, lasers became necessary to drive higher resolution and throughput. The first was the krypton fluoride (KrF) excimer laser at 248nm, followed by argon fluoride (ArF) at 193nm. Unfortunately, this is near the limit of what can realistically be used in an environment that contains air, as even 193nm light is attenuated significantly. As seen in the graph below, in order to go lower for EUV and similar wavelengths, the lithography process must be done in a vacuum, as air would otherwise absorb almost all of the energy emitted.

Photon Energy vs. Resolution / Guiding light / CC BY SA
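
For a sense of scale, the energy carried by each photon rises as the wavelength shrinks (E = hc/λ ≈ 1239.84 eV·nm / λ), which is part of why EUV photons are absorbed by practically everything, including air. A quick back-of-the-envelope conversion:

```python
# Photon energy E = h*c / wavelength, expressed in electron-volts.
# Uses the shortcut E[eV] ~= 1239.84 / wavelength[nm].

def photon_energy_ev(wavelength_nm: float) -> float:
    return 1239.84 / wavelength_nm

for name, wl in [("g-line (Hg lamp)", 436.0), ("i-line (Hg lamp)", 365.0),
                 ("KrF", 248.0), ("ArF", 193.0), ("EUV", 13.5)]:
    print(f"{name:>17}: {photon_energy_ev(wl):6.1f} eV at {wl} nm")
```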

Immersion Lithography

Immersion Lithography / Renesas

So the inevitable question is what could be done next. While different foundries adopt technologies at different times, one way to push resolution further is immersion lithography, which was introduced roughly between the 65nm and 32nm nodes. The concept is relatively simple: by immersing the gap between the projection lens and the wafer in extremely pure water, the numerical aperture of the optical system increases, because light can be coupled into the resist at steeper angles than is possible through air.
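
To put numbers on it (these are typical published figures, used here as assumptions rather than the specs of any particular scanner): water has a refractive index of about 1.44 at 193nm, which lifts the practical NA from roughly 0.93 dry to about 1.35 with immersion, and the Rayleigh estimate from earlier shows how that alone tightens the printable half-pitch.

```python
# How immersion changes the Rayleigh estimate CD = k1 * wavelength / NA.
# n(water) is about 1.44 at 193 nm; the k1 and NA values are representative assumptions.

WAVELENGTH_NM = 193.0
K1 = 0.28  # assumed, roughly what aggressive 193 nm immersion processes achieve

def half_pitch(numerical_aperture: float) -> float:
    return K1 * WAVELENGTH_NM / numerical_aperture

dry_na = 0.93        # near the practical limit when the final gap is air
immersion_na = 1.35  # enabled by the ~1.44-index water film between lens and wafer

print(f"dry 193 nm:       ~{half_pitch(dry_na):.0f} nm half-pitch")
print(f"immersion 193 nm: ~{half_pitch(immersion_na):.0f} nm half-pitch")
print(f"NA improvement:    {immersion_na / dry_na:.2f}x")
```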

Of course, this is far from a simple task in practice. The deionized water must be free of dissolved gases that could form bubbles between the lens and the wafer, and its temperature and pressure must be held extremely consistent. Otherwise the actual index of refraction of the water will change unexpectedly and cause defects in the lithography process. The 193nm light used in current lithography processes can also ionize the water, which in turn can cause reactions with the photoresist.

Unfortunately, with EUV lithography it’s no longer possible to use this method to drive higher resolution and smaller feature sizes, because any practical immersion fluid would absorb essentially all of the energy emitted. However, it’s still possible to push 193nm immersion (193i) further by using fluids with a higher index of refraction than water, which remains an area of exploration for additional resolution gains.

Multiple Patterning

Multiple Patterning / SPIE

Another technique that can be used to push resolution further, in lieu of a shorter wavelength like EUV, is multiple patterning. While it’s nominally “one” technique, there are numerous ways to implement it, all with the same goal. In essence, if a theoretical system can only resolve lines on a 64nm pitch, a second exposure offset by 32nm can print another set of lines in between, and the result is lines on a 32nm pitch, effectively doubling the resolution. There are a few ways to achieve this, known as litho-etch-litho-etch (LELE), litho-freeze-litho-etch (LFLE), and self-aligned double patterning (SADP).
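
As a toy illustration of that arithmetic (the line positions are invented for the example), interleaving the two exposures halves the pitch:

```python
# Toy illustration of pitch splitting via a second, offset exposure.
# Positions are in nanometres and purely illustrative.

PITCH = 64
NUM_LINES = 4

first_exposure  = [i * PITCH for i in range(NUM_LINES)]               # 0, 64, 128, 192
second_exposure = [i * PITCH + PITCH // 2 for i in range(NUM_LINES)]  # 32, 96, 160, 224

combined = sorted(first_exposure + second_exposure)
print("first exposure: ", first_exposure)
print("second exposure:", second_exposure)
print("combined:       ", combined)
print("resulting pitch:", combined[1] - combined[0], "nm")  # half the original 64 nm
```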

LELE is rather simple in implementation and relies upon two separate photoresist layers. In the first pass, the photoresist is exposed and developed, and the pattern is etched into a hard mask. The hard mask is important because in the second pass another layer of photoresist is applied, exposed, and developed; without the hard mask, the first pattern would simply be washed away when the second layer of photoresist is applied. LFLE is a modification of the LELE technique that eliminates the hard mask. After the initial lithography pass, the first resist layer is “frozen” by coating it with a chemical agent that is then baked and developed away, which stabilizes the first resist layer so that it remains separate from the second layer. Once this is done, a standard lithography pass completes the process.

LELE & SOAP Patterning / SPIE

SADP is a very different way of doing things, but the end result is the same. The first step is a lithography/etch pass that creates dummy (mandrel) patterns at the pitch the exposure tool can actually resolve. A hard mask or similar spacer material is then deposited conformally over the dummy patterns and etched back so that it remains only on the sidewalls lining the dummy patterns. Once this is completed, the dummy patterns are removed, and the remaining sidewall spacers serve as the mask for etching the underlying layer as usual. Because each dummy line contributes two spacers, the resulting lines are twice as close together as before, and only one lithography pass was needed.
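
Here is a minimal geometric sketch of why the spacers halve the pitch (all dimensions are assumptions chosen for illustration): each mandrel drawn at the resolvable pitch leaves one spacer on each of its two sidewalls, and once the mandrels are removed those spacers sit at half the original pitch.

```python
# Minimal geometric sketch of self-aligned double patterning (SADP).
# All dimensions are in nm and purely illustrative; for a uniform final pitch the
# mandrel width plus spacer width must equal half the mandrel pitch.

MANDREL_PITCH = 64   # pitch the single lithography pass can actually resolve
MANDREL_WIDTH = 16
SPACER_WIDTH = 16

def sadp_spacers(num_mandrels: int):
    """Return (left, right) edges of every sidewall spacer left after mandrel removal."""
    spacers = []
    for i in range(num_mandrels):
        left_edge = i * MANDREL_PITCH           # left sidewall of this mandrel
        right_edge = left_edge + MANDREL_WIDTH  # right sidewall of this mandrel
        spacers.append((left_edge - SPACER_WIDTH, left_edge))    # spacer on left wall
        spacers.append((right_edge, right_edge + SPACER_WIDTH))  # spacer on right wall
    return spacers

centres = [(a + b) / 2 for a, b in sadp_spacers(3)]
print("spacer centres:", centres)                        # -8, 24, 56, 88, 120, 152
print("final pitch:   ", centres[1] - centres[0], "nm")  # half the 64 nm mandrel pitch
```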

Unlike LELE and LFLE, there’s no need to be concerned about alignment between exposures, because after the dummy pattern is set up there is no second exposure. This may be the reason why Intel's 22nm FinFET process uses SADP. Because the sidewall spacers are defined by deposited hard mask materials rather than a second exposure, the resulting lines also have cleaner, more uniform edges. This fact will become important, especially when discussing EUV and similar next-generation lithography techniques.

While these techniques may sound like the perfect way to increase resolution, multiple patterning ultimately becomes increasingly expensive and difficult, especially because even a small misalignment between the two patterns can result in a wasted wafer. Multiple patterning also imposes design restrictions that wouldn't occur with a true resolution increase in the lithography process, because certain layouts become impossible with even-order or odd-order patterning schemes.

Computational Lithography

Phase Shifting Mask / Stanley H. Chan

While we’re still talking about (relatively) low-hanging fruit, I want to cover two other methods that are used to enhance resolution. The first is phase-shifting masks. Rather than focusing upon wafer-level changes, this improves the reticle/mask itself by exploiting the wave nature of light. There are two types of phase-shifting masks. The first is the alternating phase-shift mask, which alters the thickness of the mask in some regions, inducing a phase shift in the light waves that pass through them. The phase-shifted light destructively interferes with light from the unmodified regions, so higher contrast can be achieved between the exposed and unexposed regions. The other type is the attenuated phase-shift mask, in which the nominally dark regions let a small, phase-shifted amount of light through that interferes destructively with the light coming from the fully transparent regions.
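
To make the interference argument concrete, here is a very simplified 1-D sketch of the alternating phase-shift idea (coherent imaging is approximated by a Gaussian blur of the mask amplitude, and the dimensions and blur width are assumptions chosen for illustration): when two adjacent openings transmit light 180° out of phase, the fields cancel between them, so the intensity dip separating the two features is far deeper than with an ordinary binary mask.

```python
import numpy as np

# Crude 1-D coherent-imaging sketch: the aerial image is |blurred mask amplitude|^2.
# Grid spacing, opening width/pitch, and blur sigma are all illustrative assumptions.

x = np.arange(-300, 300)   # position in nm, 1 nm grid
opening_half_width = 40    # each clear opening is 80 nm wide
centres = [-80, 80]        # two openings, 160 nm apart

def aerial_image(phases):
    """Blur the complex mask amplitude and square it to get intensity."""
    amplitude = np.zeros_like(x, dtype=complex)
    for centre, phase in zip(centres, phases):
        amplitude[np.abs(x - centre) <= opening_half_width] = np.exp(1j * phase)
    sigma = 60.0  # Gaussian blur stands in for the diffraction-limited optics
    kernel = np.exp(-0.5 * (np.arange(-200, 201) / sigma) ** 2)
    blurred = np.convolve(amplitude, kernel / kernel.sum(), mode="same")
    return np.abs(blurred) ** 2

binary = aerial_image([0.0, 0.0])     # ordinary binary mask: both openings in phase
alt_psm = aerial_image([0.0, np.pi])  # alternating PSM: adjacent openings out of phase

midpoint = len(x) // 2                # halfway between the two openings
print(f"intensity between features, binary mask: {binary[midpoint]:.3f}")
print(f"intensity between features, alt. PSM:    {alt_psm[midpoint]:.3f}")
```

Running the sketch shows the intensity between the two features dropping to essentially zero with the phase shift, which is exactly the extra contrast the resist needs to keep the two lines separate.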

Source Notes: Intended pattern in blue, OPC-corrected in green, final pattern in red.

Optical Proximity Correction / LithoGuy

The second is optical proximity correction, or OPC. One of the imperfections in the lithography process that we haven’t talked about until now is that what is drawn on the photomask/reticle is not translated exactly onto the photoresist. In reality, line widths vary greatly depending upon how dense the surrounding pattern is, lines don’t end exactly where they do on the mask, and corners and line ends come out rounded. OPC compensates for these effects by computing the photomask needed to achieve a printed layout that is as close as possible to the intended design.
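
As a loose sketch of what “computing the photomask” means (a 1-D toy, not a production OPC engine: the optics are stood in for by a Gaussian blur, the resist by a simple threshold, and all widths are assumed values), the correction loop below simulates how the drawn line would print, compares that to the intended width, and biases the mask edges outward until the printed result matches the design.

```python
import numpy as np

# Toy 1-D model-based OPC: optics modelled as a Gaussian blur, resist as a threshold.
# The loop biases the mask edges until the printed line width matches the target.
# All numbers are illustrative assumptions, not a real OPC engine.

SIGMA = 20.0     # nm, stand-in for the diffraction-limited optics
THRESHOLD = 0.5  # resist "prints" wherever the aerial image exceeds this
X = np.arange(-300.0, 300.0)  # 1 nm grid

def printed_width(mask_left, mask_right):
    """Width that a line drawn from mask_left to mask_right actually prints."""
    mask = ((X >= mask_left) & (X < mask_right)).astype(float)
    k = np.exp(-0.5 * (np.arange(-80, 81) / SIGMA) ** 2)
    image = np.convolve(mask, k / k.sum(), mode="same")
    return float(np.count_nonzero(image >= THRESHOLD))

target_width = 40.0                                 # what the designer actually wants
left, right = -target_width / 2, target_width / 2   # naive mask = the intended layout

for _ in range(20):                                 # model-based correction loop
    error = target_width - printed_width(left, right)
    left -= 0.5 * error / 2                         # bias each edge outward by half the
    right += 0.5 * error / 2                        # (damped) error

print(f"naive mask prints:         {printed_width(-20, 20):.0f} nm (target is 40 nm)")
print(f"OPC-corrected mask prints: {printed_width(left, right):.0f} nm "
      f"(drawn {right - left:.1f} nm wide on the mask)")
```

Real OPC works in two dimensions over billions of features and also handles proximity effects between neighbouring lines, but the principle is the same: simulate, compare against the target, and adjust the mask.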

Review

To summarize, there are multiple techniques used to increase resolution and fabricate smaller transistors. By using shorter wavelengths of light, multiple patterning, and computational lithography techniques like phase-shifting masks and optical proximity correction, the industry has managed to make it all the way down to 22 nanometer feature sizes. With the launch of Intel's Core M, we’ve made another jump down to 14 nanometers using the same 193nm light source that was used at 90 nanometers.

Comments

  • danjw - Thursday, October 9, 2014 - link

    I am wondering if we haven't already hit the limits of current technology. TSMC doesn't seem to be able to produce anything more powerful than a SOC for phones and tablets on their 20nm process. While Intel seems to think they will have 14nm desktop and notebook parts out in the second quarter of next year, I wonder if they really will. Right now all we have seen from them at 14nm is similar to what TSMC is able to do at 20nm.

    This really concerns me as far as technology stagnating until the next big thing comes along.
  • Homeles - Thursday, October 9, 2014 - link

    There's no end in sight. About a year ago, perhaps a little earlier, the 7 or 5nm nodes were seen as the end of Moore's Law. Advancements have been made since then, though, and scaling past the 5nm node is very likely. 10nm's "recipe" is basically all finished at Intel at this point, with "all" that's left to do being to increase the yields. EUV is making good progress, finally, and should be ready for insertion at Intel's 7nm node (if they skipped it for 10nm, which they likely did).

    14nm has been in production for quite some time now, with Broadwell first landing in tablets later this month. The chances that Broadwell won't make it to the desktop and notebook market around the middle of next year are essentially zero.

    We're still also on schedule to have 450mm wafers introduced by the end of the decade, which would reduce costs by ~30%.

    Even not looking at Intel, TSMC has millions of 20nm-based products on the market right now. 20nm is roughly twice as dense as its 28nm predecessor. It doesn't really make sense to be so skeptical of progress, given that the proof has already been delivered.
  • danjw - Thursday, October 9, 2014 - link

    All those 20nm products are not desktop or notebook CPUs or GPUs, which they led AMD and Nvidia to believe they would be able to do. Intel is way behind its original estimates to get Broadwell out, and that's just in tablet SoCs. Intel wanted badly to get Broadwell parts out for the new school year, then it was Christmas, now it is Q2 of 2015. Yes, I think there is plenty of reason to be skeptical.
  • EMM81 - Monday, October 13, 2014 - link

    Your facts are all incorrect... If you can produce an SoC, you already have all of the capability to produce GPUs or CPUs, since it has logic, SRAM, and graphics components already. Broadwell parts are already in the hands of vendors NOW and are being sold this year, and they are not SoCs; they are low-power full core chips. 14nm Broadwell chips are in no way equivalent to 20nm TSMC chips. A fab can use the additional capability of a new node in different ways: it can increase density to make chips cheaper to produce, increase performance, or decrease power, and all of these things will be done in different ratios depending on the product. You need to work on detailed reasoning and apples-to-apples comparisons.
  • errorr - Thursday, October 9, 2014 - link

    The problem is that the only real advancements left are more expensive per transistor, and I doubt that will change.

    The great thing about process shrinks is that it reduces the cost per transistor.

    Also I doubt EUV will ever work. It has been almost ready and a couple years away for a decade. Tell me when you don't need MW levels of power to get usable light to a wafer and maybe I will consider it.

    450mm wafers have also been just a couple years away for at least 15 years. I'll believe the 2020 hype when I actually see it.

    The way to the next node is easy and everyone knows they can use triple patterning. Nobody wants that because the expense is already huge and every single circuit would have to be redrawn due to limitations on the polygons.

    I don't know what's next but the only people expecting EUV to pan out are the people who have spent billions trying to make it work and failing.
  • ShieTar - Friday, October 10, 2014 - link

    EUV is not "almost ready", it has been commercially available for 2 years now.

    http://www.zeiss.com/semiconductor-manufacturing-t...
  • Khenglish - Thursday, October 9, 2014 - link

    The problem with smaller processes is not physically producing them. The problem is that they start getting slower than larger processes. This is due to 2 reasons.

    1. Narrower, more resistive interconnects.
    2. Increased channel doping levels, required because there otherwise aren't enough dopant atoms left to form a P-N junction as processes shrink.

    1. As process sizes shrink, so do interconnects. Resistance is dependent on the cross-sectional area and length of the wire. As you shrink a process, the cross-sectional area drops at a squared rate, while length drops at a linear rate. The end result is that wires become linearly more resistive as the process shrinks. Keeping wire length down due to resistance is also a big reason why individual core transistor counts have not been going up significantly. The cores need to stay small to keep the wire length down. Repeaters to boost current drive on long wires have been around since roughly the 90nm node.

    2. If you want a 10nm process, you probably have a gate length of 10nm. The volume of a 10nm cube is 10^-18 cm^3. High doping levels are those above around 10^17 atoms per cm^3. If we take a high doping level of say 10^18 atoms per cm^3, then we have only one single dopant atom in the entire channel region of the transistor. This means that if you are one atom off, you lose a transistor, which is difficult not to do with over 1B transistors in a microprocessor. You can dope up to around 10^21 cm^-3, but then your electron and hole mobility are terrible. Mobility is directly proportional to how much current a transistor can push, so as mobility drops performance drops. See this link for what happens to mobility as dopant levels increase.

    http://ecee.colorado.edu/~bart/book/mobility.gif
  • abufrejoval - Saturday, October 11, 2014 - link

    From what I read, the problem isn't so much physical or technical feasibility as economic viability: Moore's law was mostly about the ability to deliver more power at a lower price for the end consumer pushing the technology. Now the economic yields of process shrinks are diminishing to the point where further shrinks won't pay for themselves.
  • DanNeely - Thursday, October 9, 2014 - link

    If you're also interested in how simpler transistors are made, Hackaday hosted a video lecture from someone who was producing chips with a handful of transistors on them in her home lab a few years ago.

    http://hackaday.com/2010/03/10/jeri-makes-integrat...
  • anexanhume - Thursday, October 9, 2014 - link

    Graphene isn't dead in the water. There are ways to create a bandgap, for example using bilayer graphene and introducing a gap via electric fields or doping.

    http://www-als.lbl.gov/index.php/contact/56-bilaye...

    Then you can take advantage of its remarkable carrier mobility. Mass production remains a huge issue, and I also have concerns about its effect on living creatures and the environment. http://www.gizmag.com/graphene-bad-for-environment...
