Estimating Die Size

Disclaimer: Although we have close and ready contact with ATI and NVIDIA, the fact remains that some of the more technical details concerning actual architecture and design are either closely guarded or heavily obscured from the public. We therefore attempt to estimate die sizes and transistor counts based on the information we do have, and some of these estimates are likely to be slightly off.

One of the pieces of information a lot of people might like to know is the die size of the various graphics chips. Unfortunately, ATI and NVIDIA are pretty tight-lipped about such information. Sure, you could rip the heatsink off of your graphics card and get a relatively good estimate of the die size, but unless you've got some serious cash flow, this probably isn't the best idea. Of course, some people have done that for at least a few chips, which will be somewhat useful later. Without resorting to empirical methods of measuring, though, how do we estimate the size of a processor?

Before getting into the estimating portions, let's talk about how microprocessors are made, as it is rather important. A chip starts as a simple ingot of silicon that is sliced into wafers, on which silicon dioxide is grown. This silicon dioxide is cut away using photolithography in order to expose the silicon in certain areas. Next, polysilicon is laid down and etched, and the exposed silicon is doped (ionized). Finally, another mask is added with smaller connections to the doped areas and the polysilicon, resulting in a layer of transistors, with three contacts for each newly created transistor. After the transistors are built up, metal layers are added to connect them in the fashion required for the chip. These metal layers are not transistors themselves but are the connections between transistors that form the "logic" of the chip - a miniaturized version of the metal traces you can see on a motherboard.

Microprocessors of course require multiple layers, but the transistors all sit on the single polysilicon layer. Modern chips typically have between 15 and 20 layers in total, although we really only talk about the metal layers. In between each pair of metal layers is a layer of insulation, so we usually end up with 6 to 9 metal layers. On modern AMD processors, there are 8 metal layers plus the polysilicon layer. On Intel processors, there are 6 to 8 metal layers plus the polysilicon layer, depending on the processor: e.g. 6 for Northwood, 7 on Prescott, and 8 on most of their server/workstation chips like the Gallatin.

Having more layers isn't necessarily good or bad; it's simply a necessary element. More complex designs require more complex routing, and since two crossing wires cannot touch each other, they need to run on separate layers. Potentially, having more metal layers can help to simplify the layout of the transistors and pack them closer together, but it also adds to the cost, as there are now more steps in production, and more layers result in more internal heat. There are trade-offs that can be made in many areas of chip production. In AMD's case, where they only have 200 mm wafers compared to the 300 mm wafers that Intel currently uses, adding extra layers in order to shrink the die size and/or increase speeds would probably be a good idea.

Other factors also come into play, however. Certain structures can be packed more densely than others. For example, the standard SRAM cell used in caches consists of six transistors and is one of the smaller structures in use on processors. This means that adding a lot of cache to a chip won't increase the size as quickly as adding other types of chip logic. The materials used in the various layers of a chip can also affect the speed at which the chip can run, as well as the density of the transistors and the routing in the metal layers. Copper interconnects conduct electricity better than aluminum, for instance, and the Silicon On Insulator (SOI) technology pioneered by IBM can also have an impact on speed and chip size. Many companies are also using low-k dielectric materials, which can help gates to switch faster. All of these technologies add to the cost of the chip, however, so it is not necessarily true that a chip which uses, for example, a low-k dielectric will be faster and cheaper to produce than a chip without it.

What all this means is that there is no specific way to arrive at an accurate estimate of die size without having in-depth knowledge of the manufacturing technologies, design goals, costs, etc. Such information is usually a closely guarded secret for obvious reasons. You don't want to let your competitors know about your plans and capabilities any sooner than necessary. Anyway, we now have enough background information to move on to estimating die sizes.

If we're talking about a 130 nm process technology, how many transistors of that width would fit in 1 mm? Easy enough to figure out: 1 mm / .00013 mm = 7692 T/mm, where "transistors" is abbreviated to "T" (note that .00013 mm = 130 nm). If we're working in two dimensions, we square that value: 59,166,864 T/mm2. This assumes square or circular transistors, which isn't necessarily the case, but it is close enough. So, does anyone actually think that they can pack transistors that tightly? No? Good, because right now that's a solid sheet of metal. If 59 million T/mm2 is the maximum, what is a realistic value? To find that out, we need to look at some actual processors.
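For anyone who wants to play along, here is that arithmetic as a minimal Python sketch, using nothing beyond the numbers already given above (the article rounds to 7692 T/mm before squaring, which is why its figure comes out a hair lower):

# Theoretical ceiling: one transistor per process-width square (a solid sheet of metal).
def max_density(process_nm):
    process_mm = process_nm / 1_000_000   # 130 nm = 0.00013 mm
    per_mm = 1 / process_mm               # ~7692 transistors per linear mm at 130 nm
    return per_mm ** 2                    # transistors per square mm

print(round(max_density(130)))            # ~59,171,598 T/mm2 (59,166,864 if 7692 is rounded first)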

The current Northwood core has 55 million transistors and is 131 mm2. That works out to 419,847 T/mm2, assuming uniform distribution. That sounds reasonable, but how does it compare with the theoretical packing of transistors? It's off by a factor of 141! Again assuming a uniform distribution, that means there is 11.9 times (the square root of 141) as much empty space in each direction as there is actual transistor material. Basically, electromagnetic interference (EMI) and other factors force chip designers to keep transistors and traces a certain distance apart. In the case of the P4, that distance is roughly 11.9 times the process size in both width and depth. (We ignore height, as the insulation layers are several times thicker than this.) So, we'll call this value of 11.9 on the Northwood the "Insulation Factor" or "IF" of the design.
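The same arithmetic, written out as a short Python sketch using the Northwood figures quoted above:

import math

transistors = 55_000_000     # Northwood transistor count
die_size_mm2 = 131           # die size in mm2
process_mm = 0.00013         # 130 nm

actual_density = transistors / die_size_mm2      # ~419,847 T/mm2
max_density = (1 / process_mm) ** 2              # ~59.2 million T/mm2
ratio = max_density / actual_density             # ~141
insulation_factor = math.sqrt(ratio)             # ~11.9, the "Insulation Factor"
print(actual_density, ratio, insulation_factor)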

We now have a number we can use to derive die size, given transistor counts and process technology:

Die Size = Transistor Count / (1 / ((Process in mm) * IF)^2)

Again, notice that the process size is in millimeters, so that it matches with the standard unit of measurement for die size. Using the Northwood, we can check our results:

Die Size = 55000000 / (1 / ((0.00013) * 11.9)^2)
Die Size = 131.6 mm2
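Wrapped up as a small function, the formula looks like this - just a sketch of the equation above, noting that dividing by 1/x^2 is the same as multiplying by x^2:

def estimate_die_size(transistors, process_mm, insulation_factor):
    # Die Size = Transistor Count / (1 / ((Process in mm) * IF)^2)
    #          = Transistor Count * ((Process in mm) * IF)^2
    return transistors * (process_mm * insulation_factor) ** 2

print(estimate_die_size(55_000_000, 0.00013, 11.9))   # ~131.6 mm2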

So that works, but how do we know what the IF is on different processors? If it were a constant, things would be easy, but it's not. If we have a similar chip, though, the values will hopefully be pretty similar as well. Looking at the Barton core, it has 54.3 million transistors in 101 mm2. That gives it 537,624 T/mm2, which is obviously different from the Northwood, and the resulting IF is 10.5. Other 130 nm chips have different values as well. Part of the reason may be differences in how the transistors are counted - transistor counts are really estimates, and not all of the transistors within the chip area are used. The materials used and other factors also come into play. To save time, here's a chart of IF values for various processors (based on their estimated transistor counts), with averages for each process technology included.

Calculated Process Insulation Values

Chip | Transistors | Process (nm) | Die Size (mm2) | Metal Layers | Max T/mm2 | Actual T/mm2 | Max/Actual | IF

AMD
K6 | 8,800,000 | 250 | 68 | 5 | 16,000,000 | 129,411.76 | 123.636 | 11.119
K6-2 | 9,300,000 | 250 | 81 | 6 | 16,000,000 | 114,814.81 | 139.355 | 11.805
K6-3 | 21,300,000 | 250 | 135 | 7 | 16,000,000 | 157,777.78 | 101.408 | 10.070
Argon | 22,000,000 | 250 | 184 | 7 | 16,000,000 | 119,565.22 | 133.818 | 11.568
Average for 250 nm |  |  |  |  |  |  | 124.554 | 11.141
Pluto/Orion | 22,000,000 | 180 | 102 | 7 | 30,864,198 | 215,686.27 | 143.098 | 11.962
Spitfire | 25,000,000 | 180 | 100 | 7 | 30,864,198 | 250,000.00 | 123.457 | 11.111
Morgan | 25,200,000 | 180 | 106 | 7 | 30,864,198 | 237,735.85 | 129.826 | 11.394
Thunderbird | 37,000,000 | 180 | 117 | 7 | 30,864,198 | 316,239.32 | 97.598 | 9.879
Palomino | 37,500,000 | 180 | 129 | 8 | 30,864,198 | 290,697.67 | 106.173 | 10.304
Average for 180 nm |  |  |  |  |  |  | 120.030 | 10.930
Thoroughbred A | 37,500,000 | 130 | 80 | 8 | 59,171,598 | 468,750.00 | 126.233 | 11.235
Thoroughbred B | 37,500,000 | 130 | 84 | 9 | 59,171,598 | 446,428.57 | 132.544 | 11.513
Barton | 54,300,000 | 130 | 101 | 9 | 59,171,598 | 537,623.76 | 110.061 | 10.491
Sledgehammer SOI | 105,900,000 | 130 | 193 | 9 | 59,171,598 | 548,704.66 | 107.839 | 10.385
Average for 130 nm |  |  |  |  |  |  | 119.169 | 10.906
San Diego SOI | 105,900,000 | 90 | 114 | 9 | 123,456,790 | 928,947.37 | 132.900 | 11.528

Intel
Deschutes | 7,500,000 | 250 | 118 | 5 | 16,000,000 | 63,559.32 | 251.733 | 15.866
Katmai | 9,500,000 | 250 | 131 | 5 | 16,000,000 | 72,519.08 | 220.632 | 14.854
Mendocino | 19,000,000 | 250 | 154 | 6 | 16,000,000 | 123,376.62 | 129.684 | 11.388
Average for 250 nm |  |  |  |  |  |  | 200.683 | 14.036
Coppermine First | 28,100,000 | 180 | 106 | 6 | 30,864,198 | 265,094.34 | 116.427 | 10.790
Coppermine Last | 28,100,000 | 180 | 90 | 6 | 30,864,198 | 312,222.22 | 98.853 | 9.942
Willamette | 42,000,000 | 180 | 217 | 6 | 30,864,198 | 193,548.39 | 159.465 | 12.628
Average for 180 nm |  |  |  |  |  |  | 124.915 | 11.120
Tualatin | 28,100,000 | 130 | 80 | 6 | 59,171,598 | 351,250.00 | 168.460 | 12.979
Northwood First | 55,000,000 | 130 | 146 | 6 | 59,171,598 | 376,712.33 | 157.074 | 12.533
Northwood Last | 55,000,000 | 130 | 131 | 6 | 59,171,598 | 419,847.33 | 140.936 | 11.872
Average for 130 nm |  |  |  |  |  |  | 155.490 | 12.461
Prescott | 125,000,000 | 90 | 112 | 7 | 123,456,790 | 1,116,071.43 | 110.617 | 10.517

ATI
RV350 | 75,000,000 | 130 | 91 | 8 | 59,171,598 | 824,175.82 | 71.795 | 8.473

NVIDIA
NV10 | 23,000,000 | 220 | 110 | 8 | 20,661,157 | 209,090.91 | 98.814 | 9.941

Average Insulation Factors
250 nm | 12.588
220 nm | 9.941
180 nm | 11.025
150 nm | 10.819
130 nm | 10.613
90 nm | 11.023
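If you want to reproduce the chart, the same few lines of arithmetic do it. Here is a minimal sketch using a handful of the entries above (transistor counts, processes, and die sizes taken straight from the chart):

import math

# (name, transistors, process in nm, die size in mm2) -- values from the chart above
chips = [
    ("Barton",         54_300_000, 130, 101),
    ("Northwood Last", 55_000_000, 130, 131),
    ("Prescott",      125_000_000,  90, 112),
    ("RV350",          75_000_000, 130,  91),
]

for name, count, process_nm, area in chips:
    max_density = (1 / (process_nm / 1_000_000)) ** 2    # theoretical ceiling, T/mm2
    actual_density = count / area                         # actual density, T/mm2
    insulation_factor = math.sqrt(max_density / actual_density)
    print(f"{name:15s} IF = {insulation_factor:.3f}")     # 10.491, 11.872, 10.517, 8.473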

Lacking anything better, then, we will use the averages of the Intel and AMD values for the matching ATI and NVIDIA chips, with a little discretionary rounding to keep things simple. In cases where we have better estimates of die size, we will derive the IF from those and use the same IF values on the other chips from the same company. Looking at the numbers, the IF for AMD and Intel chips tends to range from around 10 on a mature process up to 16 for initial chips on a new process. The two figures from GPUs are much lower than the typical CPU values, so we will assume GPUs tend to have more densely packed transistors (or else AMD and Intel are less aggressive in counting transistors).

These initial IF values could be off by as much as 20%, which means the end results could be off by as much as 44%. (How's that, you ask? 120% squared is 144%.) So, if this isn't abundantly clear yet, you should take these values with a HUGE dose of skepticism. If you have a better reference for an approximate die size (i.e. a web site with images and/or die size measurements), please send an email or post a comment. Getting accurate figures would be really nice, but it is virtually impossible. Anyway, here are the IF values used in the estimates, with a brief explanation of why they were chosen.
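To make the error math concrete: die size scales with the square of IF, so a 20% error in IF becomes a 44% error in area.

if_error = 0.20
area_error = (1 + if_error) ** 2 - 1   # 1.2 squared is 1.44, i.e. 44% too large
print(f"{area_error:.0%}")             # 44%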

Chipset | IF | Notes
NV1x | 10.00 | Size is ~110 mm2
NV2x | 10.00 | No real information, and this seems a common value for GPUs of the era.
NV30, NV31 | 10.00 | Initial use of 130 nm was likely not optimal.
NV34 | 9.50 | Use of mature 150 nm process.
NV35, NV36, NV38 | 9.50 | Size is ~207 mm2
NV40 | 8.75 | Size is ~288 mm2
NV43 | 9.50 | Initial use of 110 nm process will not be as optimal as 130 nm.
R300, R350, R360 | 9.00 | Mature 150 nm process should be better than initial results.
RV350, RV360, RV380 | 8.50 | Size is ~91 mm2
RV370 | 9.00 | No real information, but assuming the final chip will be smaller than RV360. Otherwise, 110 nm is useless.
R420 | 9.75 | Size is ~260 mm2
Other ATI chips | 10.00 | Standard guess lacking any other information.
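As a sanity check on how those IF values get used, here is a hedged sketch. The 75 million transistor figure for RV350 comes from the chart earlier and the 160 million for R420 is the count discussed in the next paragraph; the ~222 million for NV40 is a commonly cited number rather than anything confirmed here, so treat the output as approximate:

def estimate_die_size(transistors, process_mm, insulation_factor):
    return transistors * (process_mm * insulation_factor) ** 2

# (name, assumed transistor count, process in mm, IF from the table above)
estimates = [
    ("R420",  160_000_000, 0.00013, 9.75),
    ("NV40",  222_000_000, 0.00013, 8.75),
    ("RV350",  75_000_000, 0.00013, 8.50),
]
for name, count, process_mm, insulation_factor in estimates:
    size = estimate_die_size(count, process_mm, insulation_factor)
    print(f"{name:6s} ~{size:.0f} mm2")   # lands near the ~260, ~288, and ~91 mm2 sizes quoted above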

Note also that there are reports that ATI is more conservative with transistor counts, so their 160 million could be equal to 180 or even 200 million of NVIDIA's transistors. Basically, transistor counts are estimates, and ATI counts conservatively while NVIDIA likes to count everything they can. Neither is "right", but looking at die sizes, the 6800 is not much larger than the X800, despite a supposed 60 million transistor advantage. Either IBM's 130 nm fabs are not as advanced as TSMC's 130 nm fabs, or ATI's transistor counts are somewhat low, or NVIDIA's counts are somewhat high - most likely it's a combination of all these factors.

So, those are the values we'll use initially for our estimates. The most recent TSMC and IBM chips are using 8 metal layers, and since it does not really affect the estimates, we have put 8 metal layers on all of the GPUs. Again, if you have a source that gives an actual die size for any of the chips other than the few that we already have, please send them to us, and we can update the charts.

Comments

  • Neo_Geo - Tuesday, September 7, 2004 - link

    Nice article.... BUT....
    I was hoping the Quadro and FireGL lines would be included in the comparison.
    As someone who uses BOTH professional (ProE and SolidWorks) AND consumer level (games) software, I am interested in purchasing a Quadro or FireGL, but I want to compare these to their consumer level equivalents (as each pro level card generally has an equivalent consumer level card with some minor, but important, optimizations).

    Thanks
  • mikecel79 - Tuesday, September 7, 2004 - link

    The AIW 9600 Pros have faster memory than the normal 9600 Pro. 9600 Pro memory runs at 650Mhz vs the 600 on a normal 9600.

    Here's the Anandtech article for reference:
    http://www.anandtech.com/video/showdoc.aspx?i=1905...
  • Questar - Tuesday, September 7, 2004 - link

    #20,

    This list is not complete at all; it would be 3 times the size if it covered the last 5 or 6 years. It covers about the last 3, and is laden with errors.

    Just another example of the half-assed job this site has been doing lately.
  • JarredWalton - Tuesday, September 7, 2004 - link

    #14 - Sorry, I went with desktop cards only. Usually, you're stuck with whatever comes in your laptop anyway. Maybe in the future, I'll look at including something like that.

    #15 - Good God, Jim - I'm a CS graduate, not a graphics artist! (/Star Trek) Heheh. Actually, you would be surprised at how difficult it can be to get everything to fit. Maximum width of the tables is 550 pixels. Slanting the graphics would cause issues making it all fit. I suppose putting in vertical borders might help keep things straight, but I don't like the look of charts with vertical separators.

    #20 - Welcome to the club. Getting old sucks - after a certain point, at least.
  • Neekotin - Tuesday, September 7, 2004 - link

    great read! wow! i didn't know there were so many GPUs in the past 5-6 years. it's like more than all combined before them. guess i'm a bit old.. ;)
  • JarredWalton - Tuesday, September 7, 2004 - link

    12/13: I updated the Radeon LE entry and resorted the DX7 page. I'm sure anyone that owns a Radeon LE already knows this, but you could use a registry hack to turn them into essentially a full Radeon DDR. (By default, the Hierarchical Z compression and a few other features were disabled.) Old Anandtech article on the subject:

    http://www.anandtech.com/video/showdoc.aspx?i=1473
  • JarredWalton - Monday, September 6, 2004 - link

    Virge... I could be wrong on this, but I'm pretty sure some of the older chips could actually be configured with either SDR or DDR RAM, and I think the GF2 MX series was one of those. The problem was that you could either have 64-bit DDR or 128-bit SDR, so it really didn't matter which you chose. But yeah, there were definitely 128-bit SDR versions of the cards available, and they were generally more common than the 64-bit DDR parts I listed. The MX200, of course, was 64-bit SDR, so it got the worst of both worlds. Heh.

    I think the early Radeons had some similar options, and I'm positive that such options existed in the mobile arena. Overall, though, it's a minor gripe (I hope).
  • ViRGE - Monday, September 6, 2004 - link

    Jarred, without getting too nit-picky, your data for the GeForce 2 MX is technically wrong; the MX used a 128bit/SDR configuration for the most part, not a 64bit/DDR configuration (http://www.anandtech.com/showdoc.aspx?i=1266&p... Note that this isn't true for any of the other MX's (both the 200 and 400 widely used 64bit/DDR), and the difference between the two configurations has no effect on the math for memory bandwidth, but it's still worth noting.
  • Cygni - Monday, September 6, 2004 - link

    Ive been working with Adrian's Rojak Pot on a very similar chart to this one for awhile now. Check it out:

    http://www.rojakpot.com/showarticle.aspx?artno=88&...
  • Denial - Monday, September 6, 2004 - link

    Nice article. In the future, if you could put the text at the top of the tables on an angle it would make them much easier to read.
