Final Words

In our first high-level look, Intel's SSD DC S3500 looks to be everything we loved about the S3700, but at a more affordable price point for enterprise customers who don't need insane amounts of write endurance. With SSDs in the enterprise there's a tendency to overestimate endurance needs; I was guilty of it myself when I drew up the SSD requirements for serving AnandTech. Part of the problem is that there aren't many good software tools to quietly monitor, report on and analyze enterprise workload behavior. For the vast majority of use cases, however, I suspect the S3500 is more than enough. Even though the AnandTech database servers (content, stats tracking and forums) are fairly write intensive, the S3500 is actually the right target for us - the S3700 would deliver far more endurance than we'd ever use.
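For what it's worth, you can get a rough handle on your own write rate without any special tooling. The minimal sketch below (Python, assuming a Linux host and a hypothetical device name) samples the kernel's /proc/diskstats counters twice and extrapolates host writes per day, which you can then weigh against a drive's endurance rating:

    # Rough estimate of host writes per day from /proc/diskstats (Linux).
    # "sdb" is a hypothetical device name; diskstats reports in 512-byte sectors.
    import time

    DEVICE = "sdb"
    SAMPLE_SECONDS = 600   # a longer window smooths out bursty writes

    def sectors_written(dev):
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == dev:
                    return int(fields[9])   # sectors written since boot
        raise ValueError(f"device {dev} not found")

    start = sectors_written(DEVICE)
    time.sleep(SAMPLE_SECONDS)
    end = sectors_written(DEVICE)

    bytes_per_day = (end - start) * 512 * (86400 / SAMPLE_SECONDS)
    print(f"~{bytes_per_day / 1e9:.1f} GB of host writes per day on {DEVICE}")

Keep in mind that host writes understate NAND writes by the drive's write amplification factor, so pad the result before comparing it against an endurance spec.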

Other than write endurance, the only other thing you give up is random write performance. Intel's specs list the S3500 at roughly 1/3 the sustained 4KB random write performance of the S3700, which is in line with what I saw in our numbers. Given the lower price point, however, most customers are likely comparing performance to an array of hard drives. On an individual level, a good high-end HDD will deliver somewhere around 1 - 2 MB/s of 4KB random write performance; the S3500, by comparison, is good for about 40 MB/s. Intel's own data shows that 12 S3500s will deliver roughly the same random IO as 500 15K RPM hard drives. Based on the data I've seen, that comparison is pretty accurate.
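If you want to sanity check those figures, the arithmetic is straightforward: 4KB random throughput in MB/s is just IOPS times 4KB. The quick sketch below uses IOPS numbers that are my own rough assumptions rather than measured results, but it shows how the single-drive and aggregate comparisons fall out:

    # Back-of-the-envelope 4KB random write comparison.
    # The IOPS figures are illustrative assumptions, not measured values.
    def mb_per_s(iops, io_size_kb=4):
        return iops * io_size_kb * 1000 / 1e6   # decimal units, as drive specs use

    ssd_iops = 11_000   # ballpark 4KB random write IOPS for a drive in the S3500's class
    hdd_iops = 265      # ballpark for a short-stroked 15K RPM SAS drive

    print(f"single SSD: ~{mb_per_s(ssd_iops):.0f} MB/s")    # ~44 MB/s
    print(f"single HDD: ~{mb_per_s(hdd_iops):.1f} MB/s")    # ~1.1 MB/s

    # Aggregate comparison in the spirit of Intel's 12-vs-500 example:
    print(f"12 SSDs : ~{12 * ssd_iops:,} IOPS")
    print(f"500 HDDs: ~{500 * hdd_iops:,} IOPS")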

All of the other S3700 benefits remain. Performance consistency is excellent, which makes the S3500 ideal for use in many-drive RAID arrays. Intel's enterprise drives have typically done very well in terms of reliability (and I haven't heard any complaints about the S3700), making the S3500 a safe bet. My only real complaint is that the drive's idle power rating is too high for notebook use; otherwise I'd suggest looking at the S3500 for consumer use as well.
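The reason consistency matters so much in a big array is that a striped request is only as fast as the slowest drive it touches, so one drive's occasional latency spike becomes a great many requests' latency spike. The toy simulation below (Python, with invented latency distributions purely for illustration) shows how an array's tail latency diverges from the per-drive average:

    # Toy model: a striped request completes when its slowest drive completes.
    # Latency distributions are invented for illustration only.
    import random

    random.seed(1)
    N_DRIVES = 24
    N_REQUESTS = 100_000

    def consistent_drive():
        return random.uniform(0.9, 1.1)   # steady ~1 ms completions

    def spiky_drive():
        # usually faster, but ~2% of IOs stall behind background garbage collection
        return random.uniform(0.4, 0.6) if random.random() > 0.02 else random.uniform(20, 50)

    def array_p99(drive):
        samples = sorted(max(drive() for _ in range(N_DRIVES)) for _ in range(N_REQUESTS))
        return samples[int(0.99 * len(samples))]

    print(f"consistent drives, array p99 latency: {array_p99(consistent_drive):.1f} ms")
    print(f"spiky drives, array p99 latency:      {array_p99(spiky_drive):.1f} ms")

Even though the spiky drive is faster most of the time, with 24 drives in the stripe a large fraction of requests catches at least one drive mid-stall, so the array's worst-case behavior, not its average, is what users see.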

All in all, I've been pleased with Intel's work in the enterprise SSD space. Most interesting to me is just how aggressive Intel has been on enterprise SSD pricing. The S3500 shows up at well under $1.50/GB, not far off from consumer drive pricing. Intel doesn't typically push this aggressively for lower prices with its enterprise products, so when it happens I'm very happy.

 

Comments

  • ShieTar - Wednesday, June 12, 2013 - link

    I think the metric is supposed to show that you need a dedicated drive per VM with mechanical HDDs, but that one of these SSDs can support 12 VMs by itself without slowing down. Having 12 VMs access the same physical HDD can drive access times into not-funny territory.
    The 20GB per VM can be enough if you have a specific kernel and very little software. Think about a "dedicated" web server. Granted, the comparison assumes a quite specific usage scenario, but knowing Intel they probably did go out and retrieve that scenario from an actual commercial user. So it is a valid comparison for somebody, if maybe not the most convincing one to a broad audience.
  • Death666Angel - Wednesday, June 12, 2013 - link

    Read the conclusion page. That just refers to the fact that those 2 setups have the same random IO performance. Nothing more, nothing less.
  • FunBunny2 - Wednesday, June 12, 2013 - link

    Well, there's that other vector to consider: if you're enamoured of sequential VSAM type applications, then you'd need all that HDD footprint. OTOH, if you're into 3NF RDBMS, you'd need substantially less. So, SSD reduces footprint and speeds up the access you do. Kind of a win-win.
  • jimhsu - Wednesday, June 12, 2013 - link

    Firstly, the 500 SAS drives are almost certainly short-stroked (otherwise, how do you sustain 200 IOPS, even on 15K drives). That cuts capacity by 2x at least. Secondly, the large majority of web service/database/enterprise apps are IO-limited, not storage-limited, hence all that TB is basically worthless if you can't get data in and out fast enough. For certain applications though (I'm thinking image/video storage for one), obviously you'd use a HDD array. But their comparison metric is valid.
  • rs2 - Wednesday, June 12, 2013 - link

    That doesn't mean it's not also confusing. The primary purpose of a "SW SAN Solution" is storage, not IOPS, so one SAN is not comparable to another SAN unless they both offer the same storage capacity.

    In the specific use-case of virtualization, IOPS are generally more important than storage space. But if what they want to compare across solutions is IOPS performance, then they shouldn't label either column a "SAN".

    So yes, on the one hand it's valid, but on the other it's definitely presented in a confusing way.
  • thomas-hrb - Wednesday, June 12, 2013 - link

    It is a typical example of a vendor highlighting the statistics they want you to remember and ignoring the ones they hope are not important. That is the reason why technical people exist. Any fool can read and present excellent arguments for one side or the other. It is the understanding of these parameters, and what they actually mean in a real world usage scenario, that is the bread and butter of our industry. I don't know if this is typical for most modern SANs. I am using an IBM v7000 (a very popular SAN from IBM), but the v7000 comes with Auto Tiering, which moves "hot blocks" from normal HDD storage to SSD, so having a solid-performing, consistent random IO SSD is essential to how this type of SAN works.
  • Jaybus - Monday, June 17, 2013 - link

    Well, but look at it another way. You can put 120 SSDs in 20U and have 200 GB per VM using half the rack space and a tenth the power, but with FAR higher performance, and for less cost.

    Also, the ongoing cost of power and rack space is more important. In the same 42U space you can have a 252 SSD SAN (201,600 GB) and still use less than a fifth the power and have far, far greater performance.
  • thomas-hrb - Wednesday, June 12, 2013 - link

    They are comparing IOPS. There are a few use cases where having large amounts of storage is the main target (databases, mailbox datastores, etc.), but typically application servers are less than 20GB in size. Even web servers will typically be less than 10GB (nix based) in size. Ultimately any storage system will have a blend of both technologies in a tiered setup, with traditional HDDs to cover capacity and somewhere between 5-7% of that capacity as high performance SSDs to cover the small subset of data blocks that are "hot" and require significantly more IOPS. This new SSD simply gives storage professionals an added level of flexibility in their designs.
  • androticus - Wednesday, June 12, 2013 - link

    Why is "performance consistency" supposed to be so good... when the *lowest* performance number of the Seagate 600 is about the same as the *consistent* number for Intel? The *average* of the Seagate looks much higher? I could see this as an advantage if the competitor numbers also went way below Intel's consistent number, but not in this case.
  • Lepton87 - Wednesday, June 12, 2013 - link

    Compared to Seagate random write performance this doesn't look unlike a GF that delivers almost constant 60fps compared to a card that delivers 60-500fps, so what's the big deal? Cap the performance at whatever level the Intel SSD delivers and you will have the same consistency, but what's the point? It only matters if the drives deliver comparable performance but one is a roller-coaster and the second is very consistent, which is not the case in this comparison. Allocate more spare area to the Seagate, even 25%, and it will mop the floor with this drive and the price per GB will still be FAR lower. Very unimpressed with this drive, but because it's an Intel product we are talking about on Anandtech it's lauded and praised like there's no tomorrow.
