i-RAM as a Paging Drive

One question that we've seen a lot is whether the i-RAM can be used to store your pagefile. Since the i-RAM behaves just like a regular hard drive, Windows has no problem using it to store your pagefile, so the "can you" part of that question is easily answered. The real question is, "should you?"

We have heard arguments on both sides of the fence: some say that Windows handles memory inefficiently and inevitably pages to disk even when you have memory to spare, while others say that you'd be stupid to put your pagefile on an i-RAM rather than just adding more memory to your system. So, which is it?

Unfortunately, this is the type of thing that's difficult to benchmark, but it's pretty easy to characterize if you just sit down and use the product. We set up a machine much as we would a personal system, but focused on memory hogs - web pages with lots of Flash, Photoshop, etc. We opened them all at once, switched between the applications, and used them independently and simultaneously - basically, whatever we could do to stress the system as it would normally be stressed.

At the same time, we monitored a number of things - mainly the size of the pagefile, the amount of system memory used, the frequency of disk accesses, and pagefile usage per process... basically everything we could get our hands on through perfmon to tell us whether or not Windows was swapping to disk.
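Our monitoring ran through Windows perfmon, but the same idea can be sketched on other platforms. As a purely illustrative example (the counter names below come from the Linux kernel's /proc/vmstat interface, not from anything in our testing), a short script can watch the swap-in/swap-out counters to catch paging as it happens:

```python
import time

def swap_counters():
    """Read the kernel's cumulative swap-in/swap-out page counts.

    Assumes a Linux /proc/vmstat; our actual testing used Windows
    perfmon counters, so this is only an illustration of the idea.
    """
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            fields = line.split()
            if fields and fields[0] in ("pswpin", "pswpout"):
                counters[fields[0]] = int(fields[1])
    return counters

def paging_activity(interval=1.0):
    """Return pages swapped in/out during `interval` seconds."""
    before = swap_counters()
    time.sleep(interval)
    after = swap_counters()
    return {key: after[key] - before[key] for key in before}
```

A nonzero delta while the system supposedly has memory to spare is exactly the kind of "swapping for no reason" behavior we were trying to catch.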

The end result? There was no tangible performance difference between putting more memory in the system and keeping the pagefile on the hard disk, versus putting less memory in the system and keeping the pagefile on the i-RAM. Granted, if we had a way of measuring overall performance, it would have shown that we were much better off with more memory in the system, since system memory is accessed far more quickly than the i-RAM.

The only benefit that we found to using the i-RAM to store our pagefile was if you happened to have a couple of gigabytes of older DDR200 memory lying around; that memory would be useless as main system memory in a modern machine, but it would make a far better pagefile than a mechanical hard disk.

One more situation that benefited from storing the pagefile on the i-RAM was those seemingly random times when Windows swaps to disk for no apparent reason. For the most part, though, our system was slower with less memory and the pagefile on the i-RAM than with more memory and less reliance on the pagefile.

Adobe Photoshop is a slightly different creature, as it keeps a scratch disk that is separate from the Windows pagefile. We tested Photoshop with the i-RAM as our scratch disk, but wherever we ran out of memory, it always made more sense to just throw more memory at Photoshop to improve performance. If the operations that you're performing in Photoshop fit into system memory, then you'll never touch the scratch disk.

Overall, based on our testing, the i-RAM doesn't make much sense as a paging drive unless you already have the spare memory. The problem with "spare" DDR200 memory is that it most likely comes in small 64MB, 128MB, or maybe 256MB sticks, which don't buy you much space on an i-RAM drive. Most people are much better off just tossing more memory into their systems.

Comments

  • jconan - Wednesday, July 27, 2005 - link

    Of all the disk-intensive apps I can think of, aren't BitTorrent clients fairly disk intensive? Would the i-RAM make a good match for BitTorrent?
  • robmueller - Tuesday, July 26, 2005 - link

    I agree with the people who mention server uses for this product. There are already quite a few products like this around in the server space, but they are all VERY expensive. There's a comprehensive list here:

    http://www.storagesearch.com/ssd-buyers-guide.html

    One thing to note: most of these are flash-based drives, which means that they retain their data, but they're actually quite slow in terms of transfer speed. When it comes to pure performance solutions (which are usually DRAM with battery and/or HD backup), there are only a couple of companies:

    http://www.umem.com/Umem_NVRAM_Cards.html
    http://www.superssd.com/default.asp
    http://www.curtisssd.com/products/
    http://www.cenatek.com/product_rocketdrive.cfm
    http://www.hyperossystems.co.uk/07042003/products....
    http://www.taejin.co.kr/english/product_intro.html

    We've been long-time users of Micro Memory products, and in general, they've been great. We place database journals, filesystem journals, and general server "hot" files on the device and get great performance out of it.

    The biggest issue with most of these is price and support. RocketDrive is Windows only (we have Linux servers). HyperDrive doesn't appear to be shipping yet (we ordered one and haven't heard anything). Jetspeed, I've never even been able to get a sensible reply from. Curtis seems to be focusing on Fibre Channel (their SCSI interface drive is now quite old, only 80MB/s), which means that you need to spend almost an extra $1000 on just a controller. RamSan drives are incredibly expensive and FC only, but apparently have amazing performance as well. Umem does have a Linux driver, but they're no longer selling retail; they only sell wholesale to big storage vendors that use them in their products.

    So that basically left us really interested in the i-RAM as a potential long-term replacement for Umem in new servers we buy. It's a pity that the apparent performance is a bit lacking. On the other hand, the biggest advantage of RAM-based drives is the latency reduction. Basically, you can write, have your data committed to "permanent" storage, and move along to the next task straight away. This is the whole point of database/filesystem journals. It would be great to test the i-RAM with real server scenarios that rely on this low-latency ability. Rerunning the database tests with a combination of journal and full database on the drive would be really interesting.

    http://www.anandtech.com/IT/showdoc.aspx?i=2447

    Basically it seems that this is a really hard product to sell. There's definitely a market for it in the server space, but most of the people who realise that are big DB/file system users, and are usually willing to spend more to get an "enterprise" like product. It would be really nice if all those "middle" users with database/filesystem/email issues could be shown how to use one of these to significantly extend the life/performance of one of their servers...
  • Scarceas - Tuesday, July 26, 2005 - link

    I see this as a much easier way to run your OS in RAM (hell, I don't think there is a way to run XP on a RAM partition).

    If you have 4GB of RAM, you can partition 3.5GB of it and run Win9x in it. That leaves the maximum 512MB of conventional RAM for 9x to work with. It takes a lot of work, but I think it's faster than this because you don't have the PCI bus constraint, and the RAM controller on a motherboard is probably flat-out superior.

    It would be interesting to see a comparison...
  • Scarceas - Tuesday, July 26, 2005 - link

    Why did the 300MB file from the drive to itself take ~4 times as long as the 693MB file from the drive to itself?

    what am I missing?
  • Antiflash - Wednesday, July 27, 2005 - link

    It is a 300MB folder containing several files that could be located in different positions, which means more random access. The other is a single file; it's larger, but the data is read from adjacent positions on the disk. In the first case, you also have to add the overhead of the OS's processing time when dealing with several files.
  • JarredWalton - Wednesday, July 27, 2005 - link

    Actually, you need to make it a bit more clear: it's the Firefox source code, which is likely thousands of small files. It's not just a few or many, but *TONS* of little files. Even though the access times of the i-RAM are much lower than those of a standard HDD, there is still latency associated with the SATA bus and other portions of the system, so it's not "instantaneous". Three times as fast is still good, and that's relative to the Raptor - something like a 7200 RPM drive would be even slower relative to the i-RAM. Still, the best-case scenario for heavy IO seems to suggest that the current i-RAM is only about 3X faster than a good HDD setup. Good but not great.
  • - Tuesday, July 26, 2005 - link

    There's only one comment so far in this entire thread that really touches on where the i-RAM is truly going to succeed, and a few posters flirt with the notion in an offhanded manner.

    The benefits of an i-RAM would really come out during I/O-intensive operations - high volumes of reads and writes without high data transfer volumes, which is the case for a lot of database operations. Most of the tests performed in the article focused on large-volume data retrieval, and that's like using the haft of a katana to hammer in a nail.

    Think about web bulletin boards like PHP-Nuke, Slashcode, or phpBB - any active, dynamic website that is constantly accessing a database to load user preferences, banner ads, and static images: forum posting, article retrieval, content searching, etc. An applicable consumer example would be putting your web browser's cache on the i-RAM, or your mail or news reader's data files, or dumping a copy of your entire documents folder to it and then using Windows' search function to dig through them all for every occurrence of "the". Throw a Squid cache on it. Put your InnoDB transaction log on it. Hell, for that matter, slot a handful of these and use them as InnoDB raw partitions for your data.

    The kinds of tests that you need to perform to make an i-RAM shine would be high volumes of simultaneous searches across the entire volume - the kind of activity that would make a regular disk drive grind to a screaming halt in a fit of schizophrenic head twitching. This isn't video editing, OS booting (with exceptions), game loading, or most of the scenarios commented on above. It's still a SATA drive. Your bulk data isn't going to transfer any faster, but you *can* find it quicker and open, update, and close your files faster. Leverage *those* strengths and stop thinking that it's a RocketDrive.
  • Bensam123 - Tuesday, July 26, 2005 - link

    All of my concerns about this product were pretty much addressed:
    -SATA2
    -5.25" bay drive instead of PCI slot
    -Using a 4-pin Molex connector or SATA power connector instead
    -PCI-E instead of SATA (drivers are made every day)

    A few comments I have on this product that weren't mentioned: everyone talked about putting these into a RAID 0 array to increase capacity, but no one mentioned that it could very well double performance. I don't know what's causing the current bottlenecks with these cards besides the SATA interface, but that just doesn't seem right. Anand needs to run benchmarks like the SiSoft file system benchmark or HD Tach to narrow it down. Read/write, sequential, and random should all be almost instantaneous, limited only by the bandwidth of SATA and the bridge to which it is attached. This card could very well be limited by the chipset they tried it on (southbridge/northbridge interconnect). It might be even faster on a chipset that lacks a southbridge and only has a northbridge, such as the nForce4.

    Given the nature of this product, I don't know why motherboard manufacturers don't just add this right onto a board or make a special adapter for it that you can buy (with a better interface). I could see a lot more use for something like this if the DIMMs were attached right to my board and straight to my northbridge.

    What Gigabyte should've done (all companies with a bright idea should do this) is just give this to review sites such as AnandTech and others to see what feedback emerges before they try to market something like this. I guess Gigabyte is sort of doing this by only producing 1,000 units, but that's still 1,000 more than they need to. If my guess is correct, the second revision of this product should follow quite shortly after this one hits the market.

    As was mentioned, the price is a killer (I would rather get an Ultra320 SCSI controller and a 15,000 RPM Cheetah).
  • nullpointerus - Tuesday, July 26, 2005 - link

    The bandwidth, which could have really blown SATA drives out of the water in certain tasks, is obviously crippled by its attachment to SATA. Yet if the i-RAM were running at full PCI Express speed, then I should think opening the specs for the memory controller would quickly lead to open-source drivers. The storage is, after all, cheap DDR sticks.

    Sure, these drivers might be written for Linux or BSD instead of Windows, but surely porting GPL'd drivers to Windows would be easy for a company which can open the specs? nVidia and ATI have proprietary drivers because they claim it would be suicide for them to open up their proprietary chip interfaces. But i-RAM?
  • nullpointerus - Tuesday, July 26, 2005 - link

    I thought that compilation would make a good application for this. Source code, intermediate, and output files take up less than 4GB. The large number of small text files involved should allow the i-RAM's random access performance advantage to really shine. Add to that the fact that long compiles can take several hours - or days, if you are building Gentoo, for example - and the difference should be quite noticeable. Yet there don't seem to be any compiler tests in this article. Maybe they simply aren't I/O limited?
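A recurring theme in the comments above is that per-file overhead, not raw bandwidth, is what dominates small-file workloads like compiles and source-tree copies. As a hypothetical sketch (ours, not a benchmark from the article or the comments; the sizes and counts are arbitrary), the following writes the same total number of bytes once as a single file and once as many small files, syncing each to disk:

```python
import os
import tempfile
import time

def time_single_file(root, total_bytes):
    """Write total_bytes as one file, fsync it, and return elapsed seconds."""
    path = os.path.join(root, "big.bin")
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(b"\0" * total_bytes)
        f.flush()
        os.fsync(f.fileno())
    return time.perf_counter() - start

def time_many_files(root, total_bytes, count):
    """Write the same bytes split across `count` files, fsyncing each one."""
    chunk = total_bytes // count
    start = time.perf_counter()
    for i in range(count):
        with open(os.path.join(root, "small_%05d.bin" % i), "wb") as f:
            f.write(b"\0" * chunk)
            f.flush()
            os.fsync(f.fileno())
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as root:
    one = time_single_file(root, 2 * 1024 * 1024)
    many = time_many_files(root, 2 * 1024 * 1024, 200)
    print("single file: %.3fs, many small files: %.3fs" % (one, many))
```

On a mechanical drive, the many-file case is dramatically slower because of per-file seeks and syncs; that gap, rather than sequential throughput, is where a low-latency device like the i-RAM stands to win.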
