Finding a Good Fit

The previous benchmarks have shown that the first Calxeda server is not for the general IT market. According to Calxeda's own slides, the company targets four kinds of workloads:

  • Web applications
  • Middle-tier applications
  • Offline analytics
  • Storage and file serving

For applications such as Memcached, the 1.4GHz ECX-1000 lacks both memory bandwidth and memory capacity. Once a Cortex-A15 based server is available, this can change quickly: performance will improve significantly, and the amount of memory per CPU can be quadrupled to 16GB.

We have not tested it yet, but our own experience tells us that the majority of "scale-out" applications are still out of reach. Especially in the financial and risk modeling world, top performance and ultra-low response times are the priority.

Calxeda-based Boston servers are already making inroads as storage servers, and there is little doubt that a low-power processing unit makes a lot of sense in a storage server.

That leaves the question of whether Calxeda's latest server can make it in the web server and content delivery world. Calxeda claims 5W per server node, and no more than 250W for the complete server chassis with 24 server nodes. That's pretty cool, but there is currently another solution: two octal-core Xeon E5s deliver no fewer than 32 threads running on top of 16 very potent cores. Add a virtualization layer and you get tens of virtual servers; the only real limitation is typically the amount of RAM.
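
To make that RAM-bound math concrete, here is a back-of-the-envelope sketch in Python. The 4GB-per-VM size and the 350W full-load draw of a dual-Xeon box are our own illustrative assumptions; the 250W and 24-node figures are Calxeda's claims quoted above.

```python
# Back-of-the-envelope density math. The 4GB VM size and the 350W dual-Xeon
# draw are illustrative assumptions; 250W for 24 nodes is Calxeda's claim.
XEON_RAM_GB = 384           # dual E5 box with 16GB DIMMs, fully populated
XEON_CHASSIS_WATT = 350     # assumed full-load draw of such a box
VM_RAM_GB = 4               # assumed size of one small web-hosting VM

VIRIDIS_NODES = 24
VIRIDIS_CHASSIS_WATT = 250  # Calxeda's claim for the whole chassis

vms_per_xeon = XEON_RAM_GB // VM_RAM_GB  # RAM, not CPU, is the limit
print(f"VMs per dual-Xeon box:  {vms_per_xeon}")                               # 96
print(f"Watts per Xeon VM:      {XEON_CHASSIS_WATT / vms_per_xeon:.1f}")       # ~3.6
print(f"Watts per Viridis node: {VIRIDIS_CHASSIS_WATT / VIRIDIS_NODES:.1f}")   # ~10.4
```

The actual number of usable VMs depends on hypervisor overhead and how bursty the workloads are, but the sketch shows why the amount of RAM is the first wall you hit.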

So assume you are a hosting provider. Which server do you use as your building block? You've got two choices:

The first is the standard choice: the Intel Xeon E5 server. Its advantages are excellent performance whenever you need it, whether or not your application scales well with more threads. The Xeon can address up to 384GB of affordable RAM (two sockets, each with twelve 16GB DIMMs). If that's not enough, 768GB is possible with more expensive 32GB LR-DIMMs.

Those are impressive specs, but what if most of your customers just want to host medium-sized web sites, sites that are rich in content but rather light on processing requirements? Can the second option, the Boston Viridis server, attract such users with its much lower power consumption? How far can you go slicing and dicing the Xeon's monstrous performance into small virtual pieces? We decided to find out.

Comments

  • JohanAnandtech - Wednesday, March 13, 2013 - link

    Thanks!
  • SunLord - Wednesday, March 13, 2013 - link

    Hmm, if these didn't cost $20,000 they would make a nice front end for larger websites and forums, using less rack space and power. What setup using these would you use for AnandTech? Would you guys keep the Intel DB server?
  • Gunbuster - Wednesday, March 13, 2013 - link

    I just got a Dell R720xd decked out with 384GB and 4.3TB of storage for a hair over that price.
  • JohanAnandtech - Wednesday, March 13, 2013 - link

    Intel Xeons are still by far the better choice for relational databases that are very hard to split up (sharding is only a last resort).
  • zachj - Wednesday, March 13, 2013 - link

    I'm not sure I agree with the absolutism that seems implicit in your comment that Xeons are better for relational databases...I think there are cases where that won't be true.

    Database scale-out doesn't always require sharding...using any of a number of different off-the-shelf capabilities built right into most SQL engines, you can create multiple active replicas of your database. This is generally better-suited to workloads that aren't write-intensive, but both clustering and replication allow for writes. While this may seem like a quick-and-dirty solution that is architecturally "less good" than sharding, hardware is a lot cheaper than paying people to design a sharding solution and the dollars very often drive the conversation. As long as the database size isn't terribly large this can be a very cost-effective way to scale out a database.

    I would wager that the Anandtech website database (not the forum database) would probably be well-suited to this type of scale-out. You do waste some money on redundant storage but you more than make up for that cost by not having to pay a development team to implement sharding. If the comments section of the Anandtech website gets stored in the same underlying database, the size constraints and the write activity may appear to be incompatible with this approach, but I would in fact argue that comments don't require relational capabilities of SQL and would be more rightly stored as blobs in Hadoop or Azure Storage Tables. Then the Anandtech database is strictly articles and is both much more compact and almost entirely read-only (except for a few new articles per day).
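
For readers curious what the replication-based scale-out zachj describes looks like in code, below is a minimal, hypothetical Python sketch of the read/write split: writes go to the primary, reads fan out across replicas. The DSN strings and the route() helper are illustrative placeholders, not a real driver API; in practice the replication itself comes from the off-the-shelf features built into the SQL engine.

```python
# Hypothetical read/write splitting sketch: SELECTs can be served by any
# replica, everything else must go to the single writable primary.
import random

PRIMARY = "db-primary:5432"                            # placeholder DSN
REPLICAS = ["db-replica1:5432", "db-replica2:5432",
            "db-replica3:5432"]                        # placeholder DSNs

def route(sql: str) -> str:
    """Pick a backend for a statement: reads scale out, writes do not."""
    is_read = sql.lstrip().upper().startswith("SELECT")
    return random.choice(REPLICAS) if is_read else PRIMARY

print(route("SELECT * FROM articles WHERE id = 42"))   # one of the replicas
print(route("INSERT INTO comments VALUES (...)"))      # db-primary
```

This is why the approach fits read-heavy workloads: adding replicas multiplies read capacity, while write capacity stays pinned to the primary.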
  • rwei - Friday, March 15, 2013 - link

    To the best of my understanding, replication does well for scaling reads but doesn't do much for writes. I'd still imagine that this would work decently well with AnandTech, where I can't see the volume of writes being that large relative to the volume of reads.
  • Kurge - Wednesday, March 13, 2013 - link

    They would make a horrible front end for such websites. Just buy a single Xeon server and don't artificially limit it by using 24 VMs. Just run the app straight on the metal and it will perform massively better.
  • Oldboy1948 - Wednesday, March 13, 2013 - link

    Very interesting, Johan, as your tests often are!
    Interesting that the memory bandwidth is so much lower than anything from Intel. In fact the iPhone 5 looks much better...why? Only Intel has about the same results in compress and decompress.
  • JohanAnandtech - Wednesday, March 13, 2013 - link

    Where did you see the STREAM results on the A6? I might have missed them somewhere. The only ones I could find reported only 1 GB/s in Triad: http://www.anandtech.com/show/6298/analyzing-iphon... The quad ECX-1000 got 1.8 GB/s.
  • PCTC2 - Wednesday, March 13, 2013 - link

    Do you know what would be an interesting concept for a future version of these cluster-in-a-box systems? A solution like ScaleMP. ScaleMP is basically a reverse VM: a hypervisor on each server clusters together to run a single OS with an aggregation of all resources (cores, RAM, network, and disk). ScaleMP running on 4x dual-socket 8-core Xeon systems w/ 32GB RAM results in a usable system with 64 cores and 128GB RAM, as if it was running natively on the hardware. This would be an interesting concept to transfer to the ARM space (if a form of hardware virtualization is ever designed). In a box like this, there would be 192 cores and 192GB of RAM available to a single Fedora instance. Cluster 2 of these together and suddenly there's a system with 384 cores and 384GB of RAM in 4U. Just some food for thought.
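
As a toy illustration of the aggregation arithmetic in PCTC2's comment: a ScaleMP-style "reverse VM" presents the sum of per-node resources as one system image. The Python sketch below only reproduces the comment's numbers; it does not model how such a hypervisor actually works.

```python
# Toy resource-aggregation math for a ScaleMP-style "reverse VM".
# Figures mirror the comment above; this is arithmetic, not virtualization.
def aggregate(nodes, cores_per_node, ram_gb_per_node):
    """Total (cores, GB of RAM) a single aggregated OS image would see."""
    return nodes * cores_per_node, nodes * ram_gb_per_node

print(aggregate(4, 2 * 8, 32))  # 4x dual-socket 8-core Xeons -> (64, 128)
print(aggregate(48, 4, 4))      # 48 quad-core ARM nodes      -> (192, 192)
```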
