Hard Disk Performance: HD Tune

The Hitachi Deskstar 7K1000 posts the second-highest overall sustained transfer rates of the three drives listed. Its sustained transfer rate is nipping at the heels of the WD1500AHFD in this test, while its maximum transfer rate is slightly ahead and its minimum results are about 17% slower. The first screenshot shows the Hitachi drive with Automatic Acoustic Management (AAM) and Native Command Queuing (NCQ) turned on; the second has both features turned off. We also tested with AAM off and NCQ on: the burst rate results mirrored the first screenshot and the access times mirrored the second. In other words, turning NCQ off is what affected burst transfer rates, while turning AAM on is what increased access times in these synthetic tests.
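For readers who want to experiment with these settings themselves, AAM can also be toggled from Linux with hdparm. This is a dry-run sketch only: it prints the commands rather than running them, the review itself used Hitachi's Windows-based tools, and /dev/sda is a placeholder device name.

```python
# Dry-run sketch: build the hdparm commands that would toggle Automatic
# Acoustic Management on Linux. Assumptions: hdparm is available and the
# drive is /dev/sda (placeholder). Nothing here touches real hardware.
dev = "/dev/sda"

# hdparm's -M option accepts 128 (quietest seeks, AAM fully on) through
# 254 (fastest seeks, effectively AAM off).
cmd_quiet = f"hdparm -M 128 {dev}"  # enable AAM, quiet mode
cmd_fast = f"hdparm -M 254 {dev}"   # performance mode, AAM off

print(cmd_quiet)
print(cmd_fast)
```

Running the printed commands requires root; checking the current AAM level afterwards is just `hdparm -M /dev/sda` with no value.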

We did not expect this, as previous results with several drives showed AAM usually imposing a penalty on both transfer rates and access times. In our application tests, enabling AAM generally altered the results by less than 1%, and at times the scores were even or slightly better (access times aside). Even though the 7K1000 has excellent sustained transfer rates, we will soon see that this does not always translate into class-leading performance.

Hard Disk Performance: HD Tach

We are also including HD Tach results for each drive. Once again, the first screenshot shows the 7K1000 with AAM and NCQ turned on, while the second is with both options turned off. Our tests with AAM off and NCQ on followed the same pattern as the HD Tune results, indicating once again that AAM does not inflict a noticeable performance penalty on this drive. The balance of the results across our test samples essentially mirrors our HD Tune scores.

74 Comments

  • Gary Key - Monday, March 19, 2007 - link

    It has worked well for us to date. We also took readings with several other programs and a thermal probe. All readings were similar so we trust it at this time. I understand your concern as the sensors have not always been accurate.
  • mkruer - Monday, March 19, 2007 - link

    I hate this decimal byte rating they use. They say the capacity is 1 terabyte, meaning 1,000,000,000,000 bytes; that actually translates into ~930GB or 0.93TB that the OS will see using the more commonly used base-2 metric. That is the metric people assume you are talking about. When will the drive manufacturers get with the picture and list the standard byte capacity?
  • Spoelie - Tuesday, March 20, 2007 - link

    I don't think it matters all that much; once you've heard it, you know it. There's not even a competitive marketing advantage or any scamming going on, since ALL the drive manufacturers use it, and in marketing material there's always a note somewhere explaining 1GB = blablabla bytes. So 160GB on one drive = 160GB on another drive. That it's not the formatted capacity has been made clear for years now, so I think most of the people it matters to already know.
  • Zoomer - Wednesday, March 21, 2007 - link

    IBM used to not do this. Their advertised 120GB drive was actually 123.xxGB, where the GB referred to the decimal giga. This made useable capacity a little over 120GB. :)
  • JarredWalton - Monday, March 19, 2007 - link

    See above, as well as the SI prefix overview (http://en.wikipedia.org/wiki/SI_prefix) and the binary prefix overview (http://en.wikipedia.org/wiki/Binary_prefix) for details. It's telling that this only came into being in 1998, at which time I believe there was a class action lawsuit occurring.

    Of course, you can blame the computer industry for just "approximating" way back when KB and MB were first used to mean 1024 and 1,048,576 bytes. It probably would have been best if they had created new prefixes rather than cloning the SI prefixes and altering their meaning.

    It's all academic at this point, and we just try to present the actual result for people so that they understand what is truly meant (i.e. the "Formatted Capacity").
  • Olaf van der Spek - Monday, March 19, 2007 - link

    quote:

    Hitachi Global Storage Technologies announced right before CES 2007 they would be shipping a new 1TB (1024GB) hard disk drive in Q1 of this year at an extremely competitive price of $399 or just about 40 cents per GB of storage.


    The screenshot shows only 1 × 10^12 bytes. :(

    And I'm wondering, do you know about any plans for 2.5" desktop drives (meaning, not more expensive than cheapest 3.5" drives and better access time)?
  • crimson117 - Monday, March 19, 2007 - link

    How many bytes does this drive actually hold? Is it 1,000,000,000,000 bytes or 1,099,511,627,776 bytes?


    It's interesting... it used to not seem like a huge difference, but now that we're approaching such high capacities, it's almost a 100 GB difference - more than most laptop hard disks!
  • crimson117 - Monday, March 19, 2007 - link

    I should learn to read: Operating System Stated Capacity: 931.5 GB
  • JarredWalton - Monday, March 19, 2007 - link

    Of course, the standards people decided (AFTER the fact) that we should now use GiB, MiB, and TiB for multiples of 1024 (2^10). Most of us grew up thinking 1KB = 1024B, 1MB = 1024KB, etc. I would say the redefinition was in large part to prevent future class action lawsuits (i.e. I could see storage companies lobbying SI to create a "new" definition). Windows, of course, continues to use the older standard.

    Long story short, multiples of 1000 are used for referring to bandwidth and - according to the storage sector - storage capacity. Multiples of 1024 are used for memory capacity and - according to most software companies - storage capacity. SI sides with the storage people on the use of mebibytes, gibibytes, etc.
  • mino - Tuesday, March 20, 2007 - link

    Ehm, ehm.
    GB was ALWAYS spelled Giga-Byte, and Giga- ("G") has been a standard prefix for 10^9 since the 19th century (maybe longer).

    The ones who screwed up were the software guys, who just ignored the fact that 1024 != 1000 and used the same prefix with a different meaning.

    SI ignored this stupidity for a long time. Lately the SI guys realized the software guys are too careless to accept the reality that 1024 really does not equal 1000.

    It is far better to have some standard way to define 1024-multiples, with many people still using the old wrong prefixes, than to have no such definition at all.

    I remember clearly how confused I was back in my 8th grade Informatics class when the teacher tried (and failed, back then) to explain why SI prefixes mean 10^x everywhere else but in computers are based on 2^10, aka 1024. It took me some 4 years until I was comfortable enough with powers of two that it did not matter whether one said 512 or 2^9.

    This prefix issue is a mess that SI neither created nor caused. They are just trying to clean it up in the only way possible.
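The arithmetic this thread keeps circling is easy to check in a few lines of Python. A sketch, using exactly 10^12 bytes for the advertised capacity; the 931.5 GB figure quoted earlier implies the actual drive holds slightly more than an even 10^12 bytes:

```python
# Decimal (SI) prefixes used by drive makers vs. binary prefixes used by
# most operating systems.
TB_DECIMAL = 10**12   # "1 TB" as advertised: 1,000,000,000,000 bytes
TIB = 2**40           # 1 TiB = 1,099,511,627,776 bytes
GIB = 2**30           # 1 GiB = 1,073,741,824 bytes, which Windows labels "GB"

# What the OS reports for an advertised 1 TB drive:
print(TB_DECIMAL / GIB)             # ~931.3 "GB" in OS terms

# The gap crimson117 mentions, approaching 100 GB at this capacity:
print((TIB - TB_DECIMAL) / 10**9)   # ~99.5 decimal GB
```

The same two constants reproduce mkruer's ~930GB figure and show why the discrepancy, only ~24MB per advertised GB, adds up to nearly a laptop drive's worth of space at the 1TB mark.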
