Hammer Z-Box

The Hammer is a unique option in this space and differs vastly from the other devices in this article. It is essentially an entry-level IP SAN: a device driver on the client writes to the disks using block-level I/O instead of file-level I/O. One of the main differences between the Hammer and a traditional IP SAN is that the Hammer uses its own proprietary software driver rather than the iSCSI specification. At several points while writing this article we pondered whether the device fits in, and determined that it does due to its price point and the fact that it is still marketed as a NAS competitor.

How it works

The Hammer uses a software-based initiator that handles all I/O communication with the device and transmits it over UDP. As we alluded to above, the I/O commands are not simple file-level requests; they are actually block-level. The caveats of this approach are that it requires a fair amount of processing power on the client, and every client needs the software driver installed.
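
As a rough sketch of what such a block-level initiator does, the following Python fragment packs a hypothetical write command for transmission over UDP. The actual Zetera wire format is proprietary, so the opcode and header layout here are invented purely for illustration:

```python
import struct

SECTOR_SIZE = 512

def build_write_packet(lba: int, data: bytes) -> bytes:
    """Pack a hypothetical block-write command: opcode, starting LBA,
    sector count, then the raw sector payload. The real Zetera protocol
    is proprietary; opcode 0x02 and this header layout are invented."""
    assert len(data) % SECTOR_SIZE == 0, "payload must be whole sectors"
    sectors = len(data) // SECTOR_SIZE
    header = struct.pack("!BQH", 0x02, lba, sectors)
    return header + data

# The initiator would then ship the packet to the target over UDP, e.g.:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(build_write_packet(2048, sector_data), (target_ip, port))
```

Note that because UDP provides no delivery guarantees, a real driver of this kind also has to handle retransmission and ordering itself, which is part of why the client-side CPU cost is nontrivial.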

The Z-Box supports four RAID levels: 0, 1, 0+1, and 10. Note that while the chosen RAID configuration is stored on the box, the client software handles the actual data "RAIDing", not the device itself. Unlike most NAS devices, the Z-Box is expandable: each disk in the array gets its own IP address, and you can daisy-chain multiple devices together and create stripes and mirrors across any group of disks you like. For the average home user this technology is not very useful, but for a small or medium business looking for relatively cheap IP-based storage it is an attractive option.
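
Since the client driver does the RAIDing, it must translate every volume-relative block address into a disk and offset itself. A minimal sketch of that mapping for a RAID 0 stripe set (the stripe size and function name are our own illustrative choices, not Zetera's):

```python
STRIPE_SECTORS = 128  # 64KB stripes at 512-byte sectors (an illustrative choice)

def map_block(volume_lba: int, num_disks: int) -> tuple:
    """Map a volume-relative LBA to (disk index, disk-relative LBA) for a
    client-side RAID 0 stripe set, roughly what the Z-Box driver must do."""
    stripe, offset = divmod(volume_lba, STRIPE_SECTORS)
    disk = stripe % num_disks                      # stripes rotate across disks
    disk_lba = (stripe // num_disks) * STRIPE_SECTORS + offset
    return disk, disk_lba
```

Doing this per-request on the client is exactly the work a hardware RAID controller would normally absorb, which explains the high client CPU usage noted in the cons below.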

The Hammer comes with a single gigabit Ethernet port on the rear of the device and four hot-swap SATA drive bays. The file system is either Z-FS (Zetera File System) or NTFS, so in case you haven't noticed, this device is Windows-only. Z-FS is a custom file system that supports multi-user access and is licensed from DataPlow.

The chassis itself is quite small and has a rugged, industrial look to it. The unit measures 162mm (W) x 190mm (H) x 325mm (D) and weighs in at 9kg. The front panel consists of a few LEDs: one for each disk and one for power. The rear of the unit houses the gigabit network port and the power supply.



Admin Interface

The management interface for the Hammer is client-side, since the device itself is just a target housing disks. We were somewhat unimpressed by the administration software: using it was cumbersome, especially when creating volumes. The end user has to calculate how large the volume should be and how much space to take from each disk, which is far from intuitive.
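
To illustrate the arithmetic the software leaves to the user, here is the per-disk allocation a hypothetical helper would compute for the supported RAID levels (this sketch is our own, not part of the Hammer's management tool):

```python
def space_per_disk(volume_gb: float, num_disks: int, raid: str) -> float:
    """Compute how much space to allocate from each disk for a desired
    volume size -- the calculation the Z-Box management UI leaves to the user."""
    if raid == "0":            # striped: capacity splits evenly across disks
        return volume_gb / num_disks
    if raid == "1":            # mirrored: every disk holds a full copy
        return volume_gb
    if raid in ("0+1", "10"):  # striped mirrors: half the disks carry data
        return volume_gb / (num_disks // 2)
    raise ValueError(f"unsupported RAID level: {raid}")
```

For example, carving a 400GB RAID 10 volume out of four disks requires reserving 200GB on each of them, which is the kind of mental math the interface could easily have done for the user.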

Pros
  • Very fast
  • Small footprint
  • Very quiet
Cons
  • High CPU usage on the client
  • Marketed as a NAS competitor, but lacks much of the functionality found in NAS products, such as media streaming
Retail (ZipZoomFly.com): $1100

Comments

  • LoneWolf15 - Tuesday, December 5, 2006 - link

    Did Anandtech actually test Active Directory support with these test units?

    I ask, because after setting up multiple Buffalo Terastation Pros, it isn't an easy task. In our case, our units were shipped with a firmware that isn't even offered for download (1.2) that had multiple issues. I had to call Buffalo (btw, on-hold times are forever with this company) who told me I had to back down the firmware to the version on their website. I did this, and still had glitches, so they sent me a beta firmware (1.4) which fixed those, but requires using IP addresses (UNC pathnames are not currently supported).

    That was this summer. The Buffalo tech indicated a 1.5 firmware in testing that would be released this fall; that time has come and gone. The units work for what we need, but I'm far from impressed with their support, and would encourage Anandtech to make sure things like Active Directory support actually work as advertised.
  • smalenfant - Tuesday, December 5, 2006 - link

    I would have liked to see the power consumption of these devices. Currently I have a mythbackend server (Duron 1.6GHz) that I installed 4 disks in (no array). I couldn't justify buying a NAS and letting it sit there powered up all day. My server uses about 110W (with all disks spinning and the ATSC capture card running). I took out my NSLU2 because it was so slow, but that doesn't compare here.
  • arswihart - Tuesday, December 5, 2006 - link

    I don't see spin-down mentioned on any of these units. This is a big contributor to disk life, as a lot of wear can be saved if they are spun down when not being used. This may be a non-issue for some, but if you are just using them all day long and not at all at night, you are already saving 50% of the time until failure, and possibly doubling your disk's lifespan.
  • Deanodxb - Wednesday, December 6, 2006 - link

    The latest firmware for the Ready NAS NVs supports disk spin-down.
  • yyrkoon - Tuesday, December 5, 2006 - link

    You know, the Thecus model looks like a rip-off of Mashies' UDAT mod ( www.mashie.org ).

    Anyhow, these results are very disappointing. You can purchase a 4-disk enclosure from Addonics that will output 4 ATA drives over one USB connection and perform very similarly to these "superior" products (and cost a hell of a lot less: $150 USD, minus drives). Granted, the only RAID option with the Addonics 4x ATA->USB controller is JBOD. I think most of these manufacturers could have saved themselves some money (thus passing it on to the customer) by using older-technology equipment in their systems. Also, I'm still trying to figure out WHY the Hammer-x system just didn't opt for a Linux iSCSI target configuration, since at least Vista Ultimate will ship with MS' initiator client (at least judging by the RC2 5744 build), and the MS initiator is also currently free from MS for XP.

    I guess the only way the home enthusiast, such as probably most of the people who read these comments, would most likely be better off suited buying their own hardware, and putting it together. So much for having high hopes eh ?

    However, I still have high hopes for http://www.accusys.com.tw/eng/products_deskraid_77...">This product. It's not available yet, but I've been in contact with the company through email, and it sounds as though they are finishing up the firmware and are close to the production phase.
  • yyrkoon - Tuesday, December 5, 2006 - link

    err . . .

    quote:

    I guess the only way the home enthusiast, such as probably most of the people who read these comments, would most likely be better off suited buying their own hardware, and putting it together.


    What I was TRYING to say was: "The home enthusiast concerned about performance, would be better off building their own"

  • Deanodxb - Tuesday, December 5, 2006 - link

    I've looked at this too. What I'm going to do is put a small system together with an Areca controller, Addonics 4XSA drive bays and an Open E XSR SMB Nas system (basically a NAS specific OS which comes on a compact flash card which plugs straight into the IDE controller on your Mobo). You don't need a very powerful CPU for this, just a mobo with onboard vid and preferably Gigabit NIC and PCI-E for the controller. Open E supports the Areca controllers (as well as many others). Check out http://www.open-e.com/nasxsr/network_attached_stor...">Open-E

    Addonics also offer solutions now similar to the Accusys setup. Get an eSATA card and one of the storage tower units with a 5 x 1 SATA port multiplier. http://www.addonics.com/products/raid_system/ast4....">Addonics Storage Tower. If you want to run any RAID configuration on this, though, it will eat up CPU cycles, as the eSATA card doesn't have a dedicated hardware RAID processor, unlike the Areca.

    I currently run a mix of an 8 port Areca (3.5 TB) plus 1 (working) Ready NAS NV (1TB) and Addonics eSATA removable/mobile rack with a bunch of cartridges (250Gb - 750Gb drives). This works well. And before you ask, I use this set up to store HD movies...

    ...p0rn is burnt to DVD ;)
  • yyrkoon - Tuesday, December 5, 2006 - link

    Yeah, I've known about Open-E for some time, and I wouldn't even consider their product, as it is too expensive for what it is. It wouldn't be that hard for someone such as myself, who already knows a good bit about Linux, to make their own Linux iSCSI target. This isn't to say I know it all; I don't. However, that's what's so great about the internet and choosing the right distro (there are a lot of people who have already done it and documented what they've done).

    As for Addonics having something similar to the Accusys system, I do not know 100%, but I'm thinking this is incorrect. The Accusys system requires no host driver (well, partially; obviously the host would need eSATA drivers for the eSATA connectivity), uses 0% host CPU, and all RAID functions are done inside the enclosure. A port multiplier using current technology requires a host with either an HBA or an onboard SIL3132 (or equivalent) chipset, SIL RAID utilities on the host, and of course eSATA connectivity. Either way, using a SATA port multiplier *would* be cheaper, but at the cost of at least a little performance, and the Accusys solution at current is only SATA (192MB/s max), not SATAII (384MB/s max).

    Anyhow, all this being said, I'm still not sure exactly what I want/need. I mean, do I *really* need a storage system that is (potentially) capable of 384MB/s? Would it really make all that much of a difference if the RAID was handled by the host vs. the enclosure? Which solution would be more cost-efficient than the other? Lastly, can I even afford either solution?

    I do know that I don't like to wait when transferring files, and I do need something that will hold large amounts of data and be reliable. *Right now*, all my stored data is on USB-enclosed HDDs and seems to be working *ok*, but what I would really like is a system that holds 4 or more disks and uses removable cartridges, so that I do not have to buy another system when I run out of space, but rather just buy a new cartridge and HDD and pop it in. Personally, I still have a lot of thought to put into what I'm going to buy, and I've already been thinking about it for a long time.
  • Deanodxb - Tuesday, December 5, 2006 - link

    Nice article, I must however share my experience with the Ready NAS NV units.

    I bought 2 of these earlier this year. One works fine (250GB drives), although it is VERY slow at reads and writes, even with a Gigabit switched connection (all hardware approved/recommended by Infrant).

    The other is a complete and utter lemon. It keeps dropping the network connection after 20 seconds or so (this seems to be a fairly common fault; see the Infrant forum) and has so far killed 3 320GB drives, all new, from different batches. I had two disks fail on me in this unit, one shortly after the other, and I lost around 800GB of data. It is now a very expensive doorstop (I live in Dubai; it will cost me more to ship it back to the US than the unit is worth). Whilst the units look sturdy, in real-world usage they are anything but.

    These units are VERY hit and miss. I would not recommend Ready NAS NV units to anyone who cares about their data and fast access to it. Caveat emptor.

    I would suggest going with an Areca RAID card instead. I did and I am much happier.
  • dillytaint - Wednesday, December 6, 2006 - link

    I bought an NV because it was Linux-based and would save me some time, since I could just plug it in and be done. I knew it wasn't blindingly fast, but the performance really is terrible. Rebuild/init times are terrible, averaging ~5 hours for 4 320GB drives for me. Performance is especially bad with small files, if you have your Maildir on it for example. Jumbo frames only work in one direction, and NFS only works over UDP. I had problems with CIFS/NFS user permissions and UIDs, since the UIDs I use on my machines were in a restricted range. I had trouble with good drives always being reported as bad in the same slot and was constantly rebuilding. On large file transfers it would hang and require a reset to bring it back online. The 256MB of memory is only expandable with a small list of supported SO-DIMMs and is non-ECC. Some of these issues may be fixed now, but I was not willing to wait and beta test.

    In the end I returned it and built my own Linux box with the same drives, and it's been rock solid. It cost me $200 more in hardware, and I got 2GB of ECC RAM, a workstation-class motherboard, a dual-core CPU, and 6 SATA II connectors. And I can run anything else I want on the box. I get 48MB/s sequential writes (even after filling the buffer cache) over gigabit with jumbo frames, and 60MB/s sequential reads. I am using LVM2 and reiserfs, so I have a lot of flexibility in how I use my space, and can also export space as iSCSI targets.

    If you have experience with such things resist the urge to get one of these boxes to save time. You'll likely end up saving more time doing it yourself and end up with more reliability and better performance.
