For years now, motherboard manufacturers have been struggling to find new markets to branch out into, diversifying themselves in preparation for inevitable consolidation. Every year at Computex, we'd hear more and more about how the motherboard business was getting tougher, and we'd see more and more non-motherboard products from these manufacturers. For the most part, the non-motherboard products weren't anything special. Everyone went into making servers, then multimedia products, then cases, networking, security, water cooling; the list goes on and on.

This year's Computex wasn't very different, except for one thing. When Gigabyte showed us their collection of goodies for the new year, we were actually quite interested in one of them. And after we posted an article about it, we found that quite a few of you were very interested in it too. Gigabyte's i-RAM was an immediate success, and it wasn't so much the product itself as the idea behind it that piqued everyone's interest.

Pretty much every time a faster CPU is released, we hear from a group of users who marvel at the rate at which CPUs get faster, but loathe the sluggish pace at which storage evolves. We've been stuck with hard disks for decades now, and although the thought of eventually migrating to solid state storage has always been there, it has always seemed very distant. These days, you can easily get a multi-gigabyte solid state drive if you're willing to spend the tens of thousands of dollars it costs to get one; prices actually range from the low $1000s to the $100K range for solid state devices, obviously making them impractical for desktop users.

The performance benefits of solid state storage have always been tempting. With no moving parts, reliability improves tremendously, and at the same time, random accesses are no longer limited by slow, difficult-to-position read/write heads. While sequential transfer rates have improved tremendously over the past 5 years, thanks to ever-increasing platter densities among other improvements, it is the incredibly high latency that makes random accesses very expensive, from a performance standpoint, on conventional hard disks. A huge reduction in random access latency and an increase in peak bandwidth are the clear performance advantages of solid state storage, but until now, they both came at a very high price.

The other issue with DRAM-based solid state storage is that DRAM is volatile, meaning that as soon as power is removed from the drive, all of your data is lost. More expensive solutions get around this by using a combination of a battery backup and a hard disk that keeps a copy of all data written to the solid state drive, just in case the battery or main power should fail.

Recognizing the allure of solid state storage, especially to performance-conscious enthusiast users, Gigabyte went about creating the first affordable solid state storage device, and they called it i-RAM.

By utilizing conventional DDR memory modules, Gigabyte's i-RAM is a lot cheaper to implement than conventional solid state devices. Gigabyte sells you the card, and it's up to you to populate it with memory - a definite plus for those of us who happen to have a lot of older memory lying around, especially after next year's transition to DDR2 for AMD platforms.

The backup issue is solved by the use of a battery pack that is charged by your system on the fly, although there is no disk backup available for the i-RAM.

Through some custom logic, the i-RAM works and acts just like a regular SATA hard drive. But how much of a performance increase is there for desktop users? And is the i-RAM worth its still fairly high cost of entry? We've spent the past week trying to find out...

133 Comments

  • jconan - Wednesday, July 27, 2005 - link

    Of all the disk-intensive apps I can think of, aren't BitTorrent clients pretty disk intensive? Would the i-RAM make a good match for BitTorrent?
  • robmueller - Tuesday, July 26, 2005 - link

    I agree with the people who mention server uses for this product. There are already quite a few products like this around in the server space, but they are all VERY expensive. There's a comprehensive list here:

    http://www.storagesearch.com/ssd-buyers-guide.html

    One thing to note: most of these are flash-based drives, which means they retain their data, but are actually quite slow in terms of transfer speed. When it comes to pure performance solutions (which are usually DRAM with battery and/or HD backup), there are only a couple of companies:

    http://www.umem.com/Umem_NVRAM_Cards.html
    http://www.superssd.com/default.asp
    http://www.curtisssd.com/products/
    http://www.cenatek.com/product_rocketdrive.cfm
    http://www.hyperossystems.co.uk/07042003/products....
    http://www.taejin.co.kr/english/product_intro.html

    We've been long-time users of Micro Memory products, and in general, they've been great. We place database journals, filesystem journals, and general server "hot" files on the device and get great performance out of it.

    The biggest issue with most of these is price and support. The Rocket Drive is Windows only (we have Linux servers). The HyperDrive doesn't appear to be shipping yet (we ordered one and haven't heard anything). From Jetspeed, I've never even been able to get a sensible reply. Curtis seems to be focusing on Fibre Channel (their SCSI interface drive is now quite old, only 80MB/s), which means you need to spend almost an extra $1000 on just a controller. RamSan drives are incredibly expensive and FC-only, but apparently have amazing performance as well. Umem does have a Linux driver, but they are no longer selling retail; they only sell wholesale to big storage vendors that use the cards in their products.

    So that basically left us really interested in the i-RAM as a potential long-term replacement for Umem in the new servers we buy. It's a pity that the apparent performance is a bit lacking. On the other hand, the biggest advantage of RAM-based drives is the latency reduction. Basically, you can write, have your data committed to "permanent" storage, and move along to the next task straight away. This is the whole point of database/filesystem journals. It would be great to test the i-RAM with real server scenarios that rely on this low-latency ability. Rerunning the database tests with a combination of journal and full database on the drive would be really interesting.
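
    That journal-commit scenario is easy to sketch: time how long it takes to append a small record and force it to stable storage, over and over. The Python snippet below is a minimal illustration only; the mount point, file name, and iteration count are made-up placeholders, and the point is simply that every os.fsync() round trip is the cost a journal commit pays.

        import os, time

        # Hypothetical mount point for the i-RAM (or any drive being compared).
        TEST_FILE = "/mnt/iram/journal_test.bin"
        ITERATIONS = 1000
        RECORD = b"\x00" * 4096  # one 4KB "journal record"

        fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT, 0o644)
        start = time.time()
        for _ in range(ITERATIONS):
            os.write(fd, RECORD)
            os.fsync(fd)  # force the record to stable storage before moving on
        elapsed = time.time() - start
        os.close(fd)

        print("average commit latency: %.3f ms" % (elapsed / ITERATIONS * 1000))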

    http://www.anandtech.com/IT/showdoc.aspx?i=2447

    Basically, it seems that this is a really hard product to sell. There's definitely a market for it in the server space, but most of the people who realise that are big DB/filesystem users, and they are usually willing to spend more to get an "enterprise"-like product. It would be really nice if all those "middle" users with database/filesystem/email issues could be shown how to use one of these to significantly extend the life/performance of one of their servers...
  • Scarceas - Tuesday, July 26, 2005 - link

    I see this as a much easier way to run your OS in RAM (hell, I don't think there is a way to run XP on a RAM partition).

    If you have 4GB of RAM, you can partition 3.5GB of it as a RAM disk and run Win9x in it. That leaves the maximum 512MB of conventional RAM for 9x to work with. It takes a lot of work, but I think it's faster than this because you don't have the PCI bus constraint, and the RAM controller on a motherboard is probably flat-out superior.

    It would be interesting to see a comparison...
  • Scarceas - Tuesday, July 26, 2005 - link

    Why did the 300MB file from the drive to itself take ~4 times as long as the 693MB file from the drive to itself?

    What am I missing?
  • Antiflash - Wednesday, July 27, 2005 - link

    It is a 300MB folder containing several files that could be located in different positions, which means more random access. The other is a single file; it is larger, but the data is read from adjacent positions on the disk. In the first case, you also have to add the overhead of the OS's processing time when dealing with several files.
  • JarredWalton - Wednesday, July 27, 2005 - link

    Actually, you need to make it a bit more clear: it's the Firefox source code, which is likely thousands of small files. It's not just a few or many, but *TONS* of little files. Even though the access times of the i-RAM are much lower than that of a standard HDD, there is still latency associated with the SATA bus and other portions of the system, so it's not "instantaneous". Three times as fast is still good, and that's relative to the Raptor - something like a 7200 RPM drive would be even slower relative to the i-RAM. Still, best case scenario for heavy IO seems to suggest the current i-RAM is only about 3X faster than a good HDD setup. Good but not great.
  • - Tuesday, July 26, 2005 - link

    There's only one comment so far in this entire thread that really touches on where the i-Ram is truly going to succeed, and a few posters flirt with the notion in an offhanded manner.

    The benefits of an i-Ram would really come out during I/O intensive operations, as in high volumes of reads and writes, without really being high data transfer volumes, which is the case for a lot of database operations. A lot of the tests performed in the article really had a focus of large volume data retrieval, and that's like using the haft of a katana to hammer in a nail.

    Think about web bulletin boards like PHP-Nuke, Slashcode, phpBB, any active dynamic website that is constantly accessing a database to load user preferences, banner ads, and static images. Forum posting, article retrieval, content searching, etc. An applicable consumer example would be putting your web browser's cache on the i-RAM, or your mail or news reader's data files, or dumping a copy of your entire documents folder to it and then using Windows' search function to dig through it all for every occurrence of "the". Throw a Squid cache on it. Put your InnoDB transaction log on it. Hell, for that matter, slot a handful of these and use them as InnoDB raw partitions for your data.

    The kinds of tests you need to perform to make an I-Ram shine would be high volumes of simultaneous searches across the entire volume, the kind of act that would make a regular disk drive grind to a screaming halt in a fit of schizophrenic head twitching. This isn't video editing, OS booting (with exceptions), game loading, or most of the scenarios commented on above. It's still a SATA drive. Your bulk data isn't going to transfer any faster, but you *can* find it quicker and open, update, and close your files faster. Leverage *those* strengths and stop thinking it's a RocketDrive.
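
    As a rough sketch of that kind of test (purely illustrative; the mount point and file counts below are made-up placeholders), the following Python snippet creates thousands of tiny files and then re-reads them in random order. The per-file time is where a drive with near-zero seek latency should pull far ahead of a mechanical disk.

        import os, random, time

        # Hypothetical mount point; point it at the i-RAM volume or a regular HDD to compare.
        BASE = "/mnt/testdrive/smallfiles"
        NUM_FILES = 5000
        os.makedirs(BASE, exist_ok=True)

        # Create lots of tiny files (the "forum post / cache object" pattern).
        for i in range(NUM_FILES):
            with open(os.path.join(BASE, "f%05d" % i), "wb") as f:
                f.write(os.urandom(2048))

        # Re-read them in random order and time it.
        order = list(range(NUM_FILES))
        random.shuffle(order)
        start = time.time()
        for i in order:
            with open(os.path.join(BASE, "f%05d" % i), "rb") as f:
                f.read()
        elapsed = time.time() - start
        print("%d random small reads: %.2fs total, %.3f ms each"
              % (NUM_FILES, elapsed, elapsed / NUM_FILES * 1000))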
  • Bensam123 - Tuesday, July 26, 2005 - link

    All my concerns on this product were pretty much addressed
    -SATA2
    -5.25" Bay drive instead of PCI slot
    -Using a 4pin Molex connector or SATA power connector instead
    -PCI-E instead of SATA (drivers are made every day)

    A few comments I have on this product that weren't mentioned. Everyone talked about putting these into a RAID 0 array to increase capacity, but no one mentioned that it could very well double performance. I don't know what's causing the current bottlenecks with these cards besides the SATA interface, but that just doesn't seem right. Anand needs to run benchmarks like SiSoft's file system benchmark or HD Tach to narrow it down. Reads and writes, sequential and random, should all be almost instantaneous, limited only by the bandwidth of SATA and the bridge it is attached to. This card could very well be limited by the chipset they tried it on (southbridge/northbridge interconnect). It might be even faster on a chipset that lacks a southbridge and only has a northbridge, such as the nForce4.
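
    For a rough sense of where the ceiling sits, some back-of-the-envelope interface math (standard spec numbers, not measurements from this review) makes the point that the SATA link, not the memory, is the hard limit:

        # Back-of-the-envelope interface ceilings (spec numbers, not measured results).
        sata1_line_rate_bps = 1.5e9                                 # first-generation SATA signalling rate
        sata1_payload_mb_s = sata1_line_rate_bps * 0.8 / 8 / 1e6    # 8b/10b encoding leaves 80% as payload
        ddr400_mb_s = 400e6 * 8 / 1e6                               # 400 MT/s x 8 bytes, a single channel

        print("SATA 1.5Gbps payload ceiling: ~%d MB/s" % sata1_payload_mb_s)  # ~150 MB/s
        print("DDR400, one channel:          ~%d MB/s" % ddr400_mb_s)         # ~3200 MB/s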

    Given the nature of this product, I don't know why motherboard manufacturers don't just add this right onto a board or make a special adapter for it that you can buy (with a better interface). I could see a lot more use for something like this if the DIMMs were attached right to my board and straight to my northbridge.

    What Gigabyte should've done (all companies with a bright idea should do this) is just give this to review sites such as Anand and others to see what feedback emerges before they try to market something like this. I guess Gigabyte is sort of doing this by only producing 1,000 units, but that's still 1,000 more than they need to. If my guess is correct, the second revision of this product should follow quite shortly after this one hits the market.

    As was mentioned the price is a killer (I would rather get a SCSI320 controller and a 15,000 RPM Cheetah).
  • nullpointerus - Tuesday, July 26, 2005 - link

    The bandwidth, which could have really blown SATA drives out of the water in certain tasks, is obviously crippled by its attachment to SATA. Yet if i-RAM was running at full PCI Express speed, then I should think opening the specs for the memory controller would quickly lead to open source drivers. The storage is, after all, cheap DDR sticks.

    Sure, these drivers might be written for Linux or BSD instead of Windows, but surely porting GPL'd drivers to Windows would be easy for a company which can open the specs? nVidia and ATI have proprietary drivers because they claim it would be suicide for them to open up their proprietary chip interfaces. But i-RAM?
  • nullpointerus - Tuesday, July 26, 2005 - link

    I thought that compilation would make a good application for this. Source code, intermediate files, and output files take up less than 4GB. The large number of small text files involved should allow the i-RAM's random access performance advantage to really shine. Add to that the fact that long compiles can take several hours - or days, if you are building Gentoo, for example - and the difference should be quite noticeable. Yet there don't seem to be any compiler tests in this article. Maybe they simply aren't I/O limited?
