i-RAM's Limitations

Since the i-RAM stores your data on a volatile medium, a loss of power could mean that everything stored on the card is erased, with no hope of recovery.  While many users keep their computers on 24/7, the occasional power outage would still spell certain doom for i-RAM owners.  To combat this possibility, Gigabyte outfitted the i-RAM with its own rechargeable battery pack. 

The battery pack takes 6 hours to charge completely and charges from the 3.3V power lines on the PCI connector.  With a full charge, the i-RAM is supposed to be able to keep its data safe for up to 16 hours.   Luckily, in most situations, the i-RAM will simply keep itself powered from the PCI slot.  As long as your power supply is still plugged in and switched on, the i-RAM will be fed by the slot's 3.3V line, regardless of whether your system is running, shut down or in standby. 
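As a rough sanity check on that 16-hour figure, a quick back-of-envelope calculation shows how retention time scales with the pack's capacity and the card's standby draw. The capacity and draw numbers below are illustrative assumptions, not Gigabyte's published specifications:

```python
# Back-of-envelope estimate of i-RAM battery retention time.
# NOTE: the capacity and draw figures below are illustrative assumptions,
# not numbers from Gigabyte's spec sheet.

battery_capacity_mah = 1600   # assumed Li-ion pack capacity
battery_voltage_v = 3.7       # nominal Li-ion cell voltage
standby_draw_w = 0.35         # assumed draw of four DIMMs in self-refresh plus the controller

battery_energy_wh = battery_capacity_mah / 1000 * battery_voltage_v
retention_hours = battery_energy_wh / standby_draw_w

print(f"Battery energy: {battery_energy_wh:.2f} Wh")
print(f"Estimated retention: {retention_hours:.1f} hours")
# With these assumed figures, the result lands near the 16 hours that Gigabyte quotes.
```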

There are only three conditions where the i-RAM runs off of battery power:
1) When the i-RAM is unplugged from the PCI slot;
2) When the power cable is unplugged from your power supply (or the power supply is disconnected from your motherboard); and
3) When the power button on your power supply is turned off.
For whatever reason, unplugging the i-RAM from the PCI slot causes its power consumption to go up considerably, draining its battery much faster than the specified 16 hours would suggest.  We originally unplugged the card to test how long it would last on battery power, but Gigabyte later told us not to, since doing so puts the i-RAM into this state of accelerated battery consumption.

For the most part, the i-RAM will always be powered.  Your data is only at risk if you have a long-term power outage or you physically remove the i-RAM card. 

If the battery runs out, you will lose all of your data, and the i-RAM will no longer appear as a drive letter in Windows when you power the system back up.  You'll have to re-create the partition and copy/install all of your files and programs over again. 
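If you keep a mirror of the i-RAM's contents on a conventional hard drive, the re-copy step can at least be scripted. Below is a minimal sketch, assuming a hypothetical backup folder and a hypothetical drive letter for the re-partitioned, re-formatted i-RAM volume; installed programs that depend on the registry would still need to be reinstalled:

```python
import shutil
from pathlib import Path

# Hypothetical locations: a mirror kept on a conventional hard drive,
# and the freshly re-partitioned, re-formatted i-RAM volume.
BACKUP_DIR = Path("D:/iram_backup")
IRAM_DRIVE = Path("R:/")

def restore_iram() -> None:
    """Copy everything from the hard-drive mirror back onto the i-RAM volume."""
    for item in BACKUP_DIR.iterdir():
        target = IRAM_DRIVE / item.name
        if item.is_dir():
            shutil.copytree(item, target, dirs_exist_ok=True)
        else:
            shutil.copy2(item, target)

if __name__ == "__main__":
    restore_iram()
```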

The card features four LEDs that indicate its status: PHY_READY, HD_LED, Full and Charging. 

The PHY_READY indicator simply lets you know that the Xilinx FPGA and the card are working properly.  The HD_LED is an activity indicator that lights whenever the i-RAM is accessed.  The Full indicator turns green when the battery is fully charged, and the Charging indicator glows amber while the battery is charging.  When the i-RAM is running on battery power, none of the LEDs are illuminated.  It would be nice if there were some way of knowing how much battery power remains, for those rare situations where the i-RAM isn't being charged.  We have asked Gigabyte to add some sort of battery life indicator to a future version of the i-RAM.

133 Comments

  • Hacp - Monday, July 25, 2005 - link

    It could be useful for the pagefile if you have a couple of old 128-256MB DDR333 or older sticks lying around, especially if your RAM slots are filled with 4x 512MB. This can definitely improve performance over hard drive pagefiling, which is horrible. I wish Gigabyte would have done 8 slots instead of 4. The benefit of 8 slots is that it would allow users to truly use their old sticks of RAM (128MB, 256MB, etc.) instead of just 1GB sticks. Right now, the price is too high for the actual i-RAM module, and the price of DDR RAM is too much as well. If Gigabyte does this right, they could have a hit, but it does not look like they are moving in the right direction. IMO, 2x or 3x i-RAMs with cheap 512MB and 256MB sticks of old RAM running in a RAID configuration would be a good solution to the hard drive bottleneck, especially if people these days are willing to pay a premium for Raptors.

    Also, nice article Anand!
  • zhena - Monday, July 25, 2005 - link

    mattsaccount, you would need 3 cards to run RAID 5.

    Here is one thing that is not mentioned on AnandTech in most of the storage reviews, and that is responsiveness (as I like to call it). Back when people were starting to use RAID 0, most benchmarks showed little improvement in overall system performance; even now, the difference between a WD Raptor and a 7200RPM drive is small in terms of overall system performance. However, most benchmarks don't reflect how responsive your computer is, and it's very hard to put a number on that. When I set up RAID 0 back in the day, I noticed a huge improvement while using my computer, but I am sure that the actual boot time didn't improve much. Same thing with the i-RAM card: using it probably feels a lot snappier than using any hard drive, which is very important.
  • ss284 - Monday, July 25, 2005 - link

    RAID 0 has a higher access time than no RAID. Unless you were running highly disk-intensive applications, the snappiness would be attributable to the RAM, not the hard drive.

    -Steve
  • zhena - Monday, July 25, 2005 - link

    Not at all, Steve. The access time goes down by 0.5ms at most (don't take my word for it; I've tested it with many benchmarks), but RAID 0 shines where you need to get small amounts of data fast. If you are looking for a MB of data, you get it twice as fast as from a regular hard drive (assuming around 128K RAID blocks). And due to the way regular applications are written, and due to locality of reference, that's where the feeling of responsiveness comes from.
  • JarredWalton - Monday, July 25, 2005 - link

    RAID 0 would not improve access times. What you generally end up with is two HDDs with the same base access time that now have to both seek to the same area - i.e. you're looking for blocks 15230-15560, which are striped across both drives. Where RAID 0 really offers better performance is when you need access to a large amount of data quickly, i.e. reading a 200MB file from the array. If the array isn't fragmented, then RAID 0 would be nearly twice as fast, since you get both drives putting out their sequential transfer rate.

    RAID 1 can improve access times in theory (if the controller supports it) because only one of the drives needs to get to the requested data. If the controller has enough knowledge, it can tell the drive with the closer head position to get the data. Unfortunately, that level of knowledge rarely exists. You could then just have both drives try to get each piece of data, and whichever gets it first wins. Then your average rotational latency should be reduced from 1/2 a rotation to 1/4 a rotation (assuming the heads start at the same distance from the desired track). The reality is that RAID really doesn't help much other than for Redundancy and/or heavy server loads with a high-end controller.
  • Gatak - Monday, July 25, 2005 - link

    Um, yes. This is what I meant - mirroring (RAID 1, not RAID 0) would improve access times, as both disks could access different data independently (if the controller were smart). Sorry about the confusion.
  • ss284 - Tuesday, July 26, 2005 - link

    I was referring to RAID 0 in my post, if you didn't notice. There is no way RAID 0 would lower access times. It's impossible, seeing as the data is striped across both drives, meaning the seek would be no faster than on a single drive, and likely a tiny bit slower because of overhead.
  • Gatak - Monday, July 25, 2005 - link

    RAID-0 ought to offer better random read access times as there are two disks that can read independently. Writing would be somewhat slower though as both disks need to be synced.
  • Gatak - Monday, July 25, 2005 - link

    I'd like to see some server benchmarks with this. For example:

    * mail server (especially servers using maildir, which generates lots and lots of files)
    * web server
    * file server
    * database server (mysql, for example)

    Maybe some other benchmarks :D
  • mmp121 - Monday, July 25, 2005 - link


    He even states that on page 11:

    quote:

    One of the biggest advantages of the i-RAM is its random access performance, which comes into play particularly in multitasking scenarios where there are a lot of disk accesses.


    Anand, how about an update with some server / database benchies?

    Gigabyte might have something on its hands if it makes the card SATA-II to use the speed of the RAM. 1.6GB/s through a 150MB/s straw is not good. Anyhow, here's looking forward to rev 2.0 of the i-RAM, Gigabyte!
