Inside the Drive: 2x Density Flash and more DRAM

We of course had to pop the top and see what's changed inside the drive. As soon as you get the cover off, you realize exactly what Intel has done:


The old X25-M G1


The new X25-M G2

Both of these are 160GB drives; Intel is now using 16GB flash packages in place of the 8GB packages used in the original drive. Once 34nm production really ramps up, Intel could outfit the back of the PCB with 10 more chips and deliver a 320GB drive. I wouldn't expect that anytime soon though.
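
To put some numbers to that, here's a minimal capacity sketch. It assumes ten 16GB packages on the populated side of the PCB and positions for ten more on the back; the per-side package count is inferred from the capacities rather than confirmed by Intel:

```python
# Back-of-the-envelope capacity math for the 34nm X25-M G2.
# Assumption: ten 16GB NAND packages on the populated side (160GB / 16GB)
# and unpopulated positions for ten more on the back of the PCB.
package_gb = 16        # 34nm package capacity; the 50nm G1 used 8GB parts
front_packages = 10    # populated side of the 160GB board
back_packages = 10     # empty positions on the back

shipping_capacity = package_gb * front_packages                 # 160GB, as sold today
max_capacity = package_gb * (front_packages + back_packages)    # 320GB if the back is filled

print(f"{shipping_capacity}GB shipping, {max_capacity}GB fully populated")
```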

The controller side of the PCB looks similar, although Intel stopped using the black goop it used to cover all of the NAND flash contacts on the original drive. I wonder if that was a manufacturing measure or something to prevent competitors from hooking an oscilloscope up to the pins on the flash and reverse engineering the controller...


The old X25-M G1

It could have also been a thermal expansion thing; Microsoft had to use a similar approach to help prevent Xbox 360s from red-ringing.


The new X25-M G2

The new controller comes in the same physical package as the old one:


The old controller


The new controller

The part numbers have changed: the old controller was an Intel PC29AS21AA0, while the new one is an Intel PC29AS21BA0.

Intel also swapped vendors for the X25-M's on-board DRAM. The old 160GB drive used a 16MB Samsung 166MHz SDRAM (CAS3):


Goodbye Samsung

The new 160GB G2 drive uses a 32MB Micron 133MHz SDRAM (CAS3):


Hello Micron

DRAM size went up, while clock speed went down. I wonder what Intel is doing with all of that additional DRAM on the new drives. Hmm...

Comments

  • Robear - Thursday, July 23, 2009 - link

    Thanks for the quick update. I'm on the edge of my seat with these new drives. The suspense reminds me of when Conroe first debuted ;)

    Given the apparent quality of the controller, I'd like to assume that the low sequential writes were intentional to some degree. I'm sure Intel's engineers had to make some design decisions, and it appears as if they've chosen to sacrifice sequential writes in most (if not all) cases in favor of random writes.

    I think Intel is on the right track with the controllers on these drives. If you look at desktop usage patterns, your random reads / writes reign supreme. Sequential writes are the most infrequent operations.

    Anyway, that's based on the assumption that random write performance and sequential write performance are somehow mutually exclusive (supported by the X25-E benches...).


    VERY interesting.

    I'm also very interested to see how interfaces and controllers try to keep up with the drastic increase in storage bandwidth for the enterprise. The current mass storage architecture is mature and versatile. Going straight PCI-E seems like a step backwards in architecture in exchange for raw performance. It seems to me to be an immediate stop-gap, and I'm not sure how many serious companies will buy into this Fusion-io thing long-term.

    I'd personally rather have seven SLCs in a RAID 5EE than two redundant PCI-E cards and one hot spare. It's far more cost-efficient, and I think everyone will agree that hot-swapping SAS/SATA is a lot easier than hot-swapping an internal card.

    All in all, very exciting.
  • glugglug - Thursday, July 23, 2009 - link

    PCI-E will be an even shorter-term stopgap than most people realize.

    PCI-E x1 bandwidth is the same as regular PCI: 133MBps.
    So PCI-E x4, like the Fusion-io card uses, is actually slightly below the SATA 3.0 600MBps spec that will be out soon.
  • glugglug - Thursday, July 23, 2009 - link

    Actually, I just looked this up; it's rated higher than I thought, especially since they doubled it with PCI Express 2.0.

    2.0 is 500MB/s per lane, so theoretically the PCI-E x4 cards could get up to 2GB/s.

    Still, at the rate these things are improving, I think that's two years away.
  • iwodo - Thursday, July 23, 2009 - link

    You really don't need an x4 slot. A PCI-E 2.0 x2 slot already gives you 1GB/s.
    Since PCI-E 3.0 is coming in at 1GB/s for a single lane, I think x2 for compatibility is reasonable enough.
  • araczynski - Thursday, July 23, 2009 - link

    ...this is basically saying the drives are great (assuming the price is much better than the X25-E) as long as you're not moving large files around?

    i.e. the relatively infrequent software installations wouldn't be optimal, but otherwise it would be quite a good drive? Or basically an OS drive?

    Can you throw an average 7200RPM hard drive into the mix for a relative comparison?
  • araczynski - Thursday, July 23, 2009 - link

    Oops, never mind, I forgot about the VelociRaptor in there :)
  • bobsmith1492 - Thursday, July 23, 2009 - link

    Perhaps a log scale would be appropriate to show the orders-of-magnitude difference between the drives!
  • iwodo - Thursday, July 23, 2009 - link

    The 64Gb (8G x 8) MLC chips cost $12.50 on average. Of course, since Intel makes the flash itself (or through a joint venture), it's already making a profit on the flash. Ten of those = $125. I think the controller is probably 90nm tech and costs around $15; again, Intel makes a profit on the controller as well. With packaging, DRAM, etc., the 80GB SSD should cost about $160 to make.

    I believe NAND pricing is still based on 50nm-40nm parts, so 34nm should cost less. Hopefully in a year's time it will cost 50% less.

    In 2010, SSDs should finally take off.
  • KenAF - Thursday, July 23, 2009 - link

    Has Intel committed to supporting TRIM with a firmware update on G2?
  • smjohns - Thursday, July 23, 2009 - link

    Yeah, Intel has confirmed that TRIM support will be delivered as part of a firmware upgrade, to be released once Windows 7 supports it. Apparently XP and Vista machines will also require supporting software to be installed, as neither has built-in TRIM functionality.

    Not such good news for the original G1 drives, as it doesn't look like Intel will be releasing similar firmware for those.
