
  • voicequal - Saturday, July 12, 2014 - link

    Why isn't the RAID 1 read performance closer to the RAID 0 read? Can't data be read from both drives in RAID 1?
  • PEJUman - Saturday, July 12, 2014 - link

    While in general I agree with your sentiment, I thought about this question before, and one possible answer I came up with was to save wear and tear on the 2nd drive, i.e. it only uses the 2nd drive when the 1st one has too many ECC errors.

    This approach matches well with the raid 1 goal of ultimate redundancy.

    Ultimately, I wish more controllers would expose the finer details of RAID tuning, such as this option.
  • madmilk - Sunday, July 13, 2014 - link

    Not for sequential reads, because RAID 1 isn't striped. On RAID 0 you can read alternating stripes from each drive sequentially, but with RAID 1 you'd be reading the data twice.

    The random read scores are much closer between the two.
  • voicequal - Sunday, July 13, 2014 - link

    I see your point that the reads won't be 100% sequential as seen by the drive heads, but if drive 1 starts reading at X and drive 2 at X+128KB, you can effectively get twice the read throughput over 256KB. Then you have to move the drive heads +128KB which does incur a performance cost.

    Still with a sufficiently large read block size, I would think there could be a substantial performance improvement reading from both drives in RAID 1. Does anyone know a RAID1 HW or SW controller that can do this?
  • DanNeely - Sunday, July 13, 2014 - link

    The time spent skipping ahead is equal to the time spent reading the area being skipped in a non-fragmented file. To double read speeds on a mirrored pair, you'd need either the array controller or the driver (in a software array) to store the file sectors as 0,2,4,6,1,3,5,7... on the first drive and 1,3,5,7,0,2,4,6... on the second, so that when reading the file, each drive reads sequential sectors on the drive while the pair delivers alternating chunks of the file.
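    The interleaved layout described above can be sketched in a few lines of Python. This is a toy illustration only (the chunk numbering and drive labels are invented, not any real controller's on-disk format): both drives hold every chunk, but in a different physical order, so a full-file read advances sequentially on both drives at once.

    ```python
    # Toy sketch of an interleaved mirror layout: each drive stores all 8
    # chunks, but drive A stores the even chunks first and drive B the odd
    # chunks first, so both heads move strictly forward during a full read.
    NUM_CHUNKS = 8

    drive_a = [0, 2, 4, 6, 1, 3, 5, 7]   # physical chunk order on drive A
    drive_b = [1, 3, 5, 7, 0, 2, 4, 6]   # physical chunk order on drive B

    def plan_read(num_chunks):
        """Assign each logical chunk to the drive that holds it earliest,
        so neither head ever has to seek backwards."""
        plan = []
        for chunk in range(num_chunks):
            pos_a, pos_b = drive_a.index(chunk), drive_b.index(chunk)
            drive = 'A' if pos_a <= pos_b else 'B'
            plan.append((chunk, drive))
        return plan

    plan = plan_read(NUM_CHUNKS)
    # Even chunks come from drive A's sequential front half, odd chunks from
    # drive B's front half, so the reads alternate A, B, A, B...
    print(plan)
    ```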
  • Cerb - Sunday, July 13, 2014 - link

    No, you wouldn't. You'd just need to alternate drives for reads, keeping them balanced, so that a total QD of, say, 6 would be QD 2-4 on one drive and QD 2-4 on the other. Where the file data actually gets stored shouldn't matter, only how the RAID implementation decides to read it. If the reads are sufficiently sequential, both drives should be able to stay quite busy, and read performance should approach that of RAID 0.

    Most likely is that they didn't bother even trying that, as RAID 1 is not generally used for performance anyway.
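    The balanced-queue idea above can be sketched without changing the on-disk layout at all: the mirror controller simply sends each incoming read to whichever drive currently has the shallower queue. A toy Python sketch (the function and variable names are hypothetical, purely for illustration of the scheduling idea):

    ```python
    # Toy read balancer for a 2-way mirror: no special layout, just dispatch
    # each read to the drive with the shortest current queue.
    import heapq

    def dispatch(read_requests, num_drives=2):
        """Return per-drive request lists, keeping queue depths balanced."""
        # Min-heap of (current_queue_depth, drive_index).
        drives = [(0, d) for d in range(num_drives)]
        heapq.heapify(drives)
        queues = [[] for _ in range(num_drives)]
        for req in read_requests:
            depth, d = heapq.heappop(drives)
            queues[d].append(req)
            heapq.heappush(drives, (depth + 1, d))
        return queues

    # Six outstanding reads (total QD = 6) end up as QD = 3 per drive.
    queues = dispatch(list(range(6)))
    print([len(q) for q in queues])
    ```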
  • voicequal - Sunday, July 13, 2014 - link

    Your approach would make sequential reads quite fast, but at the expense of sequential writes, which would be split across different areas of the drive.
  • xfortis - Sunday, July 13, 2014 - link

    This is a good question. I assume most drives' controllers are set up to present data sequentially from the beginning. I don't think it's very common for a program to ask a storage device for the second half of a given file (at least not without having read the first half); I'd guess the drive doesn't have the capability within itself to address data beginning at an arbitrary point in a sequence of data - it always has to start at the beginning of the data(?).

    I think to implement this you would need to segment your data at the storage/RAID controller level - like striping, but with each drive holding all the stripes in a RAID 1. At the controller level, the controller could then take a request for data and, assuming the requested data spans at least two segments, produce two or more starting addresses for the drives to read. But then your segment size would have to be tuned to the kind of data you have (like allocation units), and there would be an additional level of addressing abstraction/complexity that would make any kind of data recovery very difficult.

    Everything I just said may be wrong. I'm just making assumptions and inferences because it's fun. Let's get a volunteer who has more knowledge or feels like trawling wikipedia for a while!
  • voicequal - Sunday, July 13, 2014 - link

    Yes, I'm thinking this would be best done at the controller level. I've seen operating systems apply their own striping of sorts at the filesystem (e.g. NTFS) level. Try writing two large files simultaneously to the same hard drive. On an OS like Windows 8, the throughput is surprisingly good. This can only be achieved if the OS is smart enough to use a reasonably large "chunk" size for writing the file fragments to the disk. In this way the disk sees mostly sequential write activity despite the two concurrent write operations, while the number of file fragments tracked by the filesystem is minimized.
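    The large-chunk behaviour described above can be simulated: interleaving two file writes in big chunks leaves each file in only a few fragments, while interleaving tiny blocks shreds both files. A toy Python model (block sizes and numbering are invented for illustration, not how any real allocator works):

    ```python
    # Toy model: two files written concurrently to one disk. The allocator
    # hands out consecutive disk blocks; we count how many fragments each
    # file ends up in for different write-chunk sizes.
    def write_two_files(file_size_blocks, chunk_blocks):
        next_block = 0
        layout = {'f1': [], 'f2': []}
        remaining = {'f1': file_size_blocks, 'f2': file_size_blocks}
        while any(remaining.values()):
            for f in ('f1', 'f2'):       # alternate between the two writers
                n = min(chunk_blocks, remaining[f])
                layout[f].extend(range(next_block, next_block + n))
                next_block += n
                remaining[f] -= n
        return layout

    def count_fragments(blocks):
        """A fragment is a maximal run of consecutive block numbers."""
        return 1 + sum(1 for a, b in zip(blocks, blocks[1:]) if b != a + 1)

    small = write_two_files(1024, 1)     # tiny 1-block chunks
    large = write_two_files(1024, 256)   # large 256-block chunks
    print(count_fragments(small['f1']), count_fragments(large['f1']))
    ```

    With 1-block chunks every block of each file is a separate fragment; with 256-block chunks each 1024-block file lands in just 4 fragments, so the disk sees mostly sequential writes.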
  • TerdFerguson - Saturday, July 12, 2014 - link

    If it can't connect directly to a router and it can't host a Plex server, I'm not interested.
  • PEJUman - Saturday, July 12, 2014 - link

    it's also only $100 if you factor in the 2 x 4TB Reds in it, worth $350.
  • fteoath64 - Sunday, July 13, 2014 - link

    If you put it that way, then $100 for the enclosure, PSU, and controller board is reasonable, so it's a good buy if a USB3-only DAS suits your needs, with the added value of a hub tossed in as an extra!
  • fteoath64 - Sunday, July 13, 2014 - link

    Clearly this is a DAS, as opposed to the NAS you might have been expecting. A totally different kettle of fish!
  • Cerb - Sunday, July 13, 2014 - link

    Um, OK. Is there any reason why it can't connect to your router or Plex server? While the review is a little ambiguous, there's no mention of needing added OS-specific drivers just to see the drives, so it *may* work with [most USB UMC enabled] routers just fine.
  • Zak - Tuesday, July 22, 2014 - link

    OK
  • darwinosx - Sunday, July 13, 2014 - link

    You couldn't test it on a Mac too? With all the Apple articles Anandtech does? I'd like to know about the Mac software and performance.
  • name99 - Sunday, July 13, 2014 - link

    What problem do you want to solve on a Mac?
    This will give you a single glob of 8TB storage with minimal config, but you're paying for that convenience. That's fine, but there are cheaper and/or higher-performing alternatives.

    If you're willing to do just a little config, for the same sort of price you could buy
    - a USB3 hub
    - a 256GB external USB3 SSD
    - two USB3 4TB hard drives
    You could then use Apple SW RAID to stripe the HDs together, and use CoreStorage (via the command-line diskutil tool) to fuse the SSD to the striped RAID. What you'd have would match this box's throughput, but with the zippiness of an SSD for random access. I have a system like this (although put together from substantially older equipment --- an old 64GB SSD and two REALLY old 300GB HDs) and it works astonishingly well given the age of the equipment, especially the HDs.
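    The stripe-then-fuse setup described above could look something like the following. The disk identifiers and volume names here are placeholders (check yours with `diskutil list` first), and CoreStorage fusion of a RAID set is an unsupported configuration, so treat this as a sketch rather than a recipe:

    ```shell
    # List attached disks to find the right identifiers (placeholders below).
    diskutil list

    # Stripe the two 4TB USB3 hard drives into one RAID 0 set.
    diskutil appleRAID create stripe StripedHDs JHFS+ disk2 disk3

    # Create a CoreStorage logical volume group fusing the SSD (disk4)
    # with the striped RAID set (disk5 here, the identifier of the new set).
    diskutil coreStorage create FusedPool disk4 disk5

    # Carve a volume out of the group, using the LVG UUID printed above.
    diskutil coreStorage createVolume <lvgUUID> jhfs+ Fused 100%
    ```

    Note that `<lvgUUID>` must be replaced with the UUID that `coreStorage create` prints; CoreStorage then migrates hot data to the SSD automatically, Fusion Drive style.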
  • darwinosx - Tuesday, August 05, 2014 - link

    I am already using two USB 3 drives and Carbon Copy Cloner. I want a more minimal solution. Interesting suggestion with the SSD, but I don't need speed for a backup solution.
  • DanNeely - Sunday, July 13, 2014 - link

    AT authors work remotely (and live all over the world) so there isn't a single shared testbed, nor can they easily loan hardware back and forth for testing. Since Apple doesn't donate hardware to build testbeds, the only authors who have Apple devices to test with are those who've bought Apple computers with their own money for personal use.
  • darwinosx - Tuesday, August 05, 2014 - link

    Anandtech has plenty of Macs available, which is really obvious.
  • npz - Monday, July 14, 2014 - link

    If the layout (stride and chunk size) is optimal, it would be close to RAID 0 for multithreaded workloads, since both disks can be read independently. Actually, even single-threaded/single-process workloads can benefit if the program is using async/queued IO... but only *if* the DAS unit were using Linux (mdadm) or other software RAID.

    However it's not; it's using a very simple JMicron controller to do the mirroring.
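    For what it's worth, Linux md can get close to RAID 0 sequential reads from two mirrored disks using the RAID 10 "far" layout: each drive still holds a full copy of the data, but laid out so that both drives can stream sequentially. A hypothetical invocation (device names are placeholders for your actual disks):

    ```shell
    # RAID 10 with 2 devices and the "far 2" layout: full redundancy like
    # RAID 1, but sequential reads can be striped across both drives.
    mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 \
        /dev/sdb /dev/sdc

    # Then format and mount the array as usual.
    mkfs.ext4 /dev/md0
    ```

    The trade-off, as noted earlier in the thread, is that writes must hit both distant copies, so sequential write performance suffers relative to a plain mirror.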
  • BillT2014 - Monday, July 14, 2014 - link

    I've never before heard that a RAID 1 component drive might not be universally readable. All the controller is supposed to do is make the two drives identical. A single component of the RAID should be readable by any other system that supports that filesystem.
  • jamyryals - Monday, July 14, 2014 - link

    Would this, or something similar from another manufacturer, work connected to a USB 2.0 port? I would like to use this connected to an older machine, but I don't really want to add a USB 3.0 adapter card. If the device slowed to USB 2.0 speeds, that would be fine.
  • celestialgrave - Tuesday, July 15, 2014 - link

    How hot did the drives get? Did the fan ever have to spin up to full speed? How would you characterize the fan noise?
  • BillT2014 - Wednesday, March 25, 2015 - link

    Still waiting for an explanation of this sentence:

    " Inserting the removed disk into a PC's SATA slot didn't show the stored data (as expected, since this is hardware RAID)."

    This defies all sense. RAID is RAID whether it is software or hardware. Maybe the reason the drive wasn't readable is because it was a RAID 0 component? That would never be readable as such under any circumstances. But a RAID 1 drive should always be readable.

  • BillT2014 - Friday, March 27, 2015 - link

    The final sentence also defies everything we know about RAID:

    "Potential areas of improvement, however, include support for hot-swapping drives and provision for data recovery from a RAID 1-member drive directly connected to a PC."

    The reviewer ought to know that if a RAID 1-member drive, directly connected to a PC, is not recoverable, then the problem is the reviewer, not the drive.

    One might wonder if the reviewer was unwittingly testing a RAID 0 member?

    These issues should be addressed and the review should be corrected. It's amazing that it has stood so long like this.
