Using the i-RAM

To begin our testing, we loaded up the i-RAM with four 1GB DDR400 sticks.  We didn't have any large DDR200 modules, so unfortunately we had to go with more modern DDR400.  Using DDR500, DDR400 or DDR200 doesn't change performance at all, since the Xilinx controller runs them all at the same frequency. 

With all four banks populated, we connected the i-RAM to our ASUS A8N-SLI Deluxe with a regular SATA cable and plugged the card into an open PCI slot. 

Powering the system on revealed that the installation was a success; the BIOS reported the presence of the i-RAM as a regular storage device connected to our SATA controller:

Before Windows would recognize the drive, we had to create a partition on it, just as you would with a brand new hard drive that shipped without one.

After doing so, the i-RAM was completely functional as a regular hard drive:

The biggest difference that you notice with the i-RAM isn't necessarily its speed, but rather its sound - it's silent.  There are no moving parts, there's no noise when the drive is accessed, and it obviously doesn't have to spin up or down when the computer starts.  These are all very obvious qualities of the card, but they don't really sink in until you actually begin using it.

Also, all disk accesses are instantaneous; formatting the drive takes no time at all, and you can even "defragment" it (although you get no benefit from doing so).

With the setup done, it was time to evaluate the i-RAM as more than just a novelty silent hard drive.  Armed with our 4GB partition, we started testing...

The Test

We ran all of our tests on the following testbed unless otherwise noted:

ASUS A8N-SLI Deluxe nForce4 SLI Motherboard
AMD Athlon 64 FX-57 Processor
1 GB OCZ 2:2:2:7 DDR400 RAM
Seagate 7200.7 120 GB Hard Drive (boot drive)
Western Digital Raptor WD740GD (test drive)
Gigabyte i-RAM w/ 4x1GB DDR400 modules (test drive)

We used the latest nForce 6.53 and ForceWare 77.72 drivers for our test bed, and paired it with the recently released GeForce 7800 GTX.


133 Comments


  • Hacp - Monday, July 25, 2005 - link

    It could be useful for a pagefile if you have a couple of old 128-256MB DDR333 or older sticks lying around, especially if your RAM slots are filled with 4x 512MB. This can definitely improve performance over hard drive pagefiling, which is horrible. I wish Gigabyte would have done 8 slots instead of 4. The benefit of 8 slots is that they would allow users to truly use their old sticks of RAM (128MB, 256MB, etc.) instead of just 1GB sticks. Right now, the price is too high for the actual i-RAM module, and the price of DDR RAM is also too much. If Gigabyte does this right, they could have a hit, but it does not look like they are moving in the right direction. IMO, 2x or 3x i-RAMs with cheap 512MB and 256MB sticks of old RAM running in a RAID configuration would be a good solution to the hard drive bottleneck, especially if people these days are willing to pay a premium for the Raptors.

    Also, nice article Anand!
  • zhena - Monday, July 25, 2005 - link

    mattsaccount, you would need 3 cards to run RAID 5.

    Here is one thing that is not mentioned on AnandTech in most of the storage reviews, and that is responsiveness (as I like to call it). Back early in the day when people were starting to use RAID 0, most benchmarks showed little improvement in overall system performance; even now, the difference between a WD Raptor and a 7200RPM drive is small in terms of overall system performance. However, most benchmarks don't reflect how responsive your computer is - it's very hard to put a number on that. When I set up RAID 0 back in the day, I noticed a huge improvement while using my computer, but I am sure that the actual boot time didn't decrease much. Same with the i-RAM card: using it probably feels a lot snappier than using any hard drive, which is very important.
  • ss284 - Monday, July 25, 2005 - link

    RAID 0 has a higher access time than no RAID. Unless you were running highly disk-intensive applications, the snappiness would be attributed to RAM, not the hard drive.

    -Steve
  • zhena - Monday, July 25, 2005 - link

    Not at all, Steve - the access time goes down 0.5ms at most (don't take my word for it; I've tested it with many benchmarks), but RAID 0 shines where you need to get small amounts of data fast. If you are looking for a MB of data, you get it twice as fast as from a regular hard drive (assuming around 128K RAID blocks). And due to the way regular applications are written, and due to locality of reference, that's where the feel of responsiveness comes from.
  • JarredWalton - Monday, July 25, 2005 - link

    RAID 0 would not improve access times. What you generally end up with is two HDDs with the same base access time that now both have to seek to the same area - e.g. you're looking for blocks 15230-15560, which are striped across both drives. Where RAID 0 really offers better performance is when you need access to a large amount of data quickly, e.g. reading a 200MB file from the array. If the array isn't fragmented, then RAID 0 would be nearly twice as fast, since you get both drives putting out their sequential transfer rate.

    RAID 1 can improve access times in theory (if the controller supports it), because only one of the drives needs to get to the requested data. If the controller has enough knowledge, it can tell the drive with the closer head position to get the data. Unfortunately, that level of knowledge rarely exists. You could then just have both drives try to get each piece of data, and whichever gets it first wins. Then your average rotational latency should be reduced from 1/2 of a rotation to 1/3 of a rotation (assuming the heads start at the same distance from the desired track and the platters' rotational positions are independent). The reality is that RAID really doesn't help much other than for redundancy and/or heavy server loads with a high-end controller.
  • Gatak - Monday, July 25, 2005 - link

    Um, yes. This is what I meant - mirroring (RAID 1, not RAID 0) would improve access times, as both disks could access different data independently (if the controller was smart). Sorry about the confusion.
  • ss284 - Tuesday, July 26, 2005 - link

    I was referring to RAID 0 in my post, if you didn't notice. There is no way RAID 0 would lower access times. It's impossible, seeing as the data is spanned across both drives, meaning the seek would be no faster than on a single drive, and likely a tiny bit slower because of overhead.
  • Gatak - Monday, July 25, 2005 - link

    RAID 0 ought to offer better random read access times, as there are two disks that can read independently. Writing would be somewhat slower, though, as both disks need to be synced.
  • Gatak - Monday, July 25, 2005 - link

    I'd like to see some server benchmarks with this. For example:

    * mail server (especially maildir-based servers, which generate lots and lots of small files)
    * web server
    * file server
    * database server (mysql, for example)

    Maybe some other benchmarks :D
  • mmp121 - Monday, July 25, 2005 - link


    He even states that on page 11:

    quote:

    One of the biggest advantages of the i-RAM is its random access performance, which comes into play particularly in multitasking scenarios where there are a lot of disk accesses.


    Anand, how about an update with some server / database benchies?

    Gigabyte might have something on its hands if it makes the card SATA-II to use the speed of the RAM. 1.6GB/s through a 150MB/s straw is not good. Anyhow, here's looking forward to REV 2.0 of the i-RAM GigaByte!
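The RAID mechanics argued over in the thread above lend themselves to a quick sanity check. The sketch below is a minimal model, not anything from the article: the 2-drive array and 128K stripe size are assumptions taken from zhena's comment, and the drive/offset mapping is the standard round-robin striping scheme. It shows why a 1MB sequential read is split evenly across both RAID 0 members, and then simulates JarredWalton's "both drives race for the data" RAID 1 model, where the expected rotational wait for two independent drives is E[min(U1, U2)] = 1/3 of a rotation rather than the 1/2 of a single drive:

```python
import random

STRIPE_KB = 128  # stripe unit size, assumed from the comment thread
DRIVES = 2       # two-drive RAID 0 array, also an assumption

def stripe_location(offset_kb, stripe_kb=STRIPE_KB, drives=DRIVES):
    """Map a logical offset (in KB) to (drive index, offset on that drive)."""
    stripe = offset_kb // stripe_kb          # which stripe unit holds it
    drive = stripe % drives                  # stripes go round-robin
    local = (stripe // drives) * stripe_kb + (offset_kb % stripe_kb)
    return drive, local

# A 1MB sequential read covers 8 stripe units, alternating drives 4/4,
# so each member streams half the data: roughly double the throughput.
counts = [0] * DRIVES
for kb in range(0, 1024, STRIPE_KB):
    counts[stripe_location(kb)[0]] += 1
print(counts)  # -> [4, 4]

# RAID 1 reads with both drives racing: each drive's rotational wait is
# uniform over one rotation, and whichever head arrives first wins.
# The mean of min(U1, U2) is 1/3 of a rotation, vs 1/2 for one drive.
random.seed(42)
waits = [min(random.random(), random.random()) for _ in range(200_000)]
print(round(sum(waits) / len(waits), 3))  # close to 0.333
```

Note that the striping math also illustrates why RAID 0 doesn't cut access times: a small read still lands on a single member, which seeks no faster than it would alone.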
