For years now, motherboard manufacturers have been struggling to branch out into other markets, trying to diversify ahead of the inevitable consolidation of the motherboard business. Every year at Computex, we'd hear more about how tough the motherboard business was getting, and we'd see more non-motherboard products from these manufacturers. For the most part, those products weren't anything special. Everyone went into making servers, then multimedia products, then cases, networking, security, water cooling; the list goes on and on.

This year's Computex wasn't very different, except for one thing. When Gigabyte showed us their collection of goodies for the new year, we were actually quite interested in one of them. And after we posted an article about it, we found that quite a few of you were very interested in it too. Gigabyte's i-RAM was an immediate hit, and it wasn't so much the product itself as the idea behind it that piqued everyone's interest.

Pretty much every time a faster CPU is released, we hear from a group of users who marvel at the rate at which CPUs get faster, but loathe the sluggish pace at which storage evolves. We've been stuck with hard disks for decades now, and although the thought of eventually migrating to solid state storage has always been there, it has always been very distant. These days, you can easily get a multi-gigabyte solid state drive if you're willing to pay for it; prices range from the low $1000s to the $100K range for solid state devices, obviously making them impractical for desktop users.

The performance benefits of solid state storage have always been tempting. With no moving parts, reliability improves tremendously, and at the same time, random accesses are no longer limited by slow, difficult-to-position read/write heads. While sequential transfer rates have improved tremendously over the past five years, thanks to ever-increasing platter densities among other improvements, it is the incredibly high access latency that makes random accesses so expensive, from a performance standpoint, on conventional hard disks. A huge reduction in random access latency and a big increase in peak bandwidth are the clear performance advantages of solid state storage, but until now, they both came at a very high price.
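To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. The latency and transfer rate figures are assumptions picked to be representative of a desktop hard disk and a DRAM-based device behind SATA, not measurements of any particular product:

    # Time to service many small random reads: hard disk vs. DRAM-based storage.
    # All figures below are illustrative assumptions, not measured values.
    REQUESTS = 10_000                # number of random 4KB reads
    BLOCK = 4 * 1024                 # bytes per read

    hdd_latency = 0.013              # ~8ms seek + ~4ms rotational delay (seconds)
    hdd_transfer = 60e6              # ~60MB/sec sequential (bytes/second)
    dram_latency = 0.00005           # tens of microseconds (seconds)
    dram_transfer = 130e6            # SATA-limited, ~133MB/sec (bytes/second)

    def total_time(latency, transfer):
        # Every random request pays the full access latency before transferring.
        return REQUESTS * (latency + BLOCK / transfer)

    print("hard disk: %6.1f seconds" % total_time(hdd_latency, hdd_transfer))
    print("DRAM:      %6.2f seconds" % total_time(dram_latency, dram_transfer))

With these assumptions, the hard disk needs over two minutes of pure seeking while the DRAM-based device finishes in under a second. Even if the numbers are off by a factor of two, the per-access latency term dominates, and that is exactly the term that solid state storage removes.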

The other issue with DRAM-based solid state storage is that DRAM is volatile, meaning that as soon as power is removed from the drive, all of your data is lost. More expensive solutions get around this by using a combination of a battery backup and a hard disk that keeps a copy of all data written to the solid state drive, just in case the battery or main power should fail.

Recognizing the allure of solid state storage, especially to performance-conscious enthusiast users, Gigabyte went about creating the first affordable solid state storage device, and they called it i-RAM.

By utilizing conventional DDR memory modules, Gigabyte's i-RAM is a lot cheaper to implement than conventional solid state devices. Gigabyte sells you the card, and it's up to you to populate it with memory - a definite plus for those of us who happen to have a lot of older memory lying around, especially after next year's transition to DDR2 for AMD platforms.

The backup issue is solved by the use of a battery pack that is charged by your system on the fly, although there is no disk backup available for the i-RAM.

Through some custom logic, the i-RAM works and acts just like a regular SATA hard drive. But how much of a performance increase is there for desktop users? And is the i-RAM worth its still fairly high cost of entry? We've spent the past week trying to find out...

Comments (133)

  • NStriker - Thursday, July 28, 2005 - link

    Anand quotes $90 per GB of RAM here, but I'm wondering if the i-RAM works with the much cheaper high-density junk that you see out there all the time, like 128Mx4 modules. On motherboards, usually only SiS chipsets can handle that type of RAM, but there's no reason that the Xilinx FPGA couldn't.

    Right now I'm seeing 1GB of that stuff for $63.
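    Quick math, assuming the card's four DIMM slots and the ~$150 card price mentioned further down this thread (all 2005 street prices):

        # cost to fill all four slots with 1GB modules, card included
        slots, card = 4, 150
        for label, per_gb in (("standard DDR", 90), ("high-density", 63)):
            print("%-13s $%d" % (label, card + slots * per_gb))
        # standard DDR  $510
        # high-density  $402

    If the FPGA took the cheap stuff, that's over $100 off a fully populated card.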
  • jonsin - Thursday, July 28, 2005 - link

    Since the Athlon 64 moved the memory controller onto the CPU, the north bridge no longer needs one. Why not reuse the chipset's original memory controller for i-RAM duty? By supporting both SDRAM and DDR RAM, people could make use of their old RAM (which isn't much use nowadays) as a physical RAM drive.

    Set aside space on the motherboard for an extra DDR slot dedicated to i-RAM, and an additional daughter card could add even more slots.

    Wouldn't that ultimately be a cheaper way to implement i-RAM?
  • jonsin - Thursday, July 28, 2005 - link

    What's more, power could be drawn directly from the motherboard's power supply. With an approach similar to the i-RAM's, an extra battery could keep the RAM powered for several hours.

    Giving the north bridge DDR/SDRAM capability isn't new technology; every chipset company has it. They could just graft a lower-performance version of the original memory controller onto the north bridge (DDR200, so more modules can be supported at lower cost); the cost overhead would be relatively small.

    I think the extra cost would come from the additional motherboard layout, north bridge die size, and chipset packaging (more pins). I suppose it could be as low as $20?
  • jonsin - Thursday, July 28, 2005 - link

    What's more, the physical SATA link could be omitted, since the controller in the north bridge could talk to the SATA controller internally (to the south bridge through HyperTransport?). In that case, wouldn't performance increase considerably and the overall layout be tidier? (No external cables or cards needed.)
  • mindless1 - Friday, July 29, 2005 - link

    No, these are all problems. The point is to have universal platform support that's gentle on power consumption. That means a tailored controller, and even then, we're seeing that the main limit is the battery. "Tidy" is an unimportant human desire, particularly inside a closed PC case. All they have to do is route the bus traces well on the card and be done with it.
  • slumbuk - Wednesday, July 27, 2005 - link

    HP sells an add-on for their DL380 server for $200 (at a discount) that gets you 128MB of disk write cache... it makes a good system fast for disk writes, too.

    Linux vendors could use this card to enable file-system data and metadata journaling, getting gigabytes of write cache for similar money... cheap, reliable, fast general-purpose file servers with fast disk writes and no risk of data loss. Fast meaning no disk-head latency and no rotational latency - just transfer time.

    It would sell better with ECC memory, or the ability to mirror two cards - at least to careful server buyers.

  • slumbuk - Wednesday, July 27, 2005 - link

    You could set up the i-RAM drive as the external journal device for ReiserFS or ext3 journaling file systems - logging both metadata and data - for fast, safe systems without too much fuss.
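    With stock e2fsprogs, it'd look something like this - device names are placeholders, assuming the i-RAM shows up as /dev/sdb:

        mke2fs -b 4096 -O journal_dev /dev/sdb          # i-RAM as a dedicated journal device
        mke2fs -b 4096 -j -J device=/dev/sdb /dev/sda1  # ext3 on the disk, external journal
        mount -o data=journal /dev/sda1 /mnt            # journal file data as well as metadata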

    I think I want one - but not as much as I want other stuff..
  • AtaStrumf - Wednesday, July 27, 2005 - link

    Interesting but hardly useful for most. Kind of makes sense to only make 1000, but of course that's where the $150 price tag comes from.
  • rbabiak - Wednesday, July 27, 2005 - link

    I guess it would add to the board cost, but a SATA controller on the PCI card would make it a little nicer, since then you're not taking up one of your SATA channels. I only have 2, and they're currently both used for a RAID-0.

    Also, if they put a SATA interface on the PCI card and then short-circuited the back end to connect directly to the memory, wouldn't they be able to get much higher transfer speeds than SATA, while all the existing SATA drivers could still be used with it, given that they emulate an existing SATA interface?
  • DerekWilson - Thursday, July 28, 2005 - link

    Better to use the onboard ports ...

    A 33MHz/32-bit PCI slot only grants a max of 133MB/sec, which would make the PCI bus a limiting factor for the SATA controller.
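    That ceiling is just the bus clock times the bus width: 33.33MHz x 32 bits / 8 bits per byte = ~133MB/sec, and that's a theoretical peak.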

    Step beyond that and remember that the PCI bus is shared among all of your PCI cards. Depending on the motherboard, some onboard devices may be built onto the PCI bus as well.

    With bandwidth on current southbridge chips already being dedicated to SATA (or SATA-II), it would be a waste in more ways than one to build a SATA controller into the i-RAM.

    That's my take on it anyway.

    Derek Wilson
