IBM recently launched a very interesting product: the JS22 blade. The IBM JS22 blade server has two dual-core POWER6 CPUs clocked at an impressive 4 GHz. Even more interesting is the honesty of IBM's marketing team about the Sun Fire X4450 server. IBM took the peak SPECint2006 results of several servers published on spec.org and compared both price/performance (using the online price configurators at IBM, Sun, and HP) and performance per watt (source: online power calculators).
 
Now look at the results in IBM's sales presentation:
 

 
So according to IBM's own (SPEC-based) numbers, Sun has a winner here. The calculated power numbers are probably a bit higher than the real ones. Moreover, the Sun Fire X4450 in this comparison used the power-hungry Intel Xeon X7350 CPUs. If you used the X7340 instead, you would lose less than 20% of the performance while probably cutting power consumption in half, so the numbers above could get even better for Sun's latest server. IBM feels that "security, reliability, operations and less cabling" should convince buyers to go the JS22 blade route anyway.
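To see why the X7340 swap would help Sun's performance-per-watt story, here is a minimal sketch of the arithmetic. The baseline numbers are made up purely for illustration (only the ratios from the text matter: a worst-case 20% performance loss against roughly 50% lower power):

```python
# Hypothetical illustration of the X7350 -> X7340 trade-off described above.
# Baseline figures are invented for illustration; only the ratios matter.
def perf_per_watt(perf, watts):
    return perf / watts

base_perf, base_watts = 100.0, 130.0   # assumed X7350 baseline (illustrative)
x7340_perf = base_perf * 0.80          # "less than 20% performance loss": worst case 20%
x7340_watts = base_watts * 0.50        # "probably 50% less power"

improvement = perf_per_watt(x7340_perf, x7340_watts) / perf_per_watt(base_perf, base_watts)
print(f"perf/watt improvement: {improvement:.2f}x")
```

Even in the worst case (a full 20% performance hit), halving the power budget yields roughly a 1.6x gain in performance per watt, which is why the comparison would tilt further in Sun's favor.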
 
 
 
As we are currently testing the X4450, we are very curious about your opinion. Do you feel that density is important? With the exception of the UK, most public datacenters here in Western Europe do not charge much for rack space. The reason is that as more and more large companies build their own datacenters, public datacenters are left with a lot of unused rack space. It really shows the difference between reality and what the media report: most reports talk about datacenters running out of space and power. The latter seems to be true, but the space problem seems to be highly exaggerated.
 
So what is at the top of your checklist when you shop for a new server? Performance per watt? Does "less cabling" make an impression on you? What about rather vague claims like "reliability and operations"? Let us know.
 
 
 
 
 
 


8 Comments


  • bakerzdosen - Thursday, May 22, 2008 - link

    OK, I forgot to comment on that (since you asked in the article).

    We've found the x4450's to be reliable... except we've got one bug that's killing us. It could be a Veritas Volume Manager bug, but at this point, I really couldn't say what it is with 100% certainty. Each system has crashed (hard) once in the past 6 months. Other than that, they've been fine. Our current state is not acceptable, but they are still much more reliable than any Windows box we've got out there doing the same job... (Our apps tend to push hardware a LOT - not like your typical Apache/PHP sort of apps FWIW.)

    All in all, I REALLY like them.
  • bakerzdosen - Thursday, May 22, 2008 - link

    Well, I personally LOVE the x4450's. They simply amaze me at how fast they are. They are overkill for most of our customers, but when you compare them to something similar in the Sparc world (say, oh, I dunno, a v890), they are about 15-20% of the price.

    Admittedly it's apples (note lower case) and oranges, but it's just a fast machine...

    http://browse.geekbench.ca/geekbench2/top
  • MGSsancho - Saturday, May 03, 2008 - link

    Looking back at other comments, I would imagine all the issues you asked about are important to different people. How about looking into mixed blade offerings? Sun's blades apparently support Intel, AMD, and Niagara procs in the same chassis; not cheap, but nothing to scoff at either. Either way, keep up the IT blogging =)
  • tjoynt - Tuesday, April 29, 2008 - link

    In my experience with companies running their own datacenter, space can be an issue, but power and cooling are far more limiting. Indeed, greater density from blade or 1/2U 2/4-way multicore systems tends to exacerbate the power and cooling issues. Thus efficiency becomes critical, not because of the cost of the electricity, but because of the limits the datacenter infrastructure places on expansion.
  • JohanAnandtech - Wednesday, April 30, 2008 - link

    That is indeed my experience too. It seems that in many cases (with exceptions, such as the ones the two posters above give as examples) power density is very important.

    The question is whether there is any merit to a high-performance blade... I feel in most cases it is best to go with a lower-power blade instead. If you really need that kind of top performance, 1U servers now deliver densities that rival high-performance blades, but they are easier to cool. That brings me to my other question: how important is the reduced cabling of blades, in your opinion?
  • jnusbaum - Monday, April 28, 2008 - link

    In my industry (finance) many data centers are always full. By the time we get things built, we already need/have way more machines than we planned for. Remote sites and colo are options but can't be used for a lot of things because of security and bandwidth (into colo and remote sites) considerations. So yes, density matters, and it is always good to use less space rather than more, all other things being equal (which they aren't).
  • Marquis - Monday, April 28, 2008 - link

    While I certainly wouldn't call this "typical" in any sense of the word, I recently did some work where density was a *huge* deal.

    Essentially, we needed to pack in about 13K systems (not a typo) into as little space as possible.

    Unfortunately, vendor choice was somewhat limited, so we ended up specifying HP blade servers. But that certainly saved quite a bit of rack space.
  • MGSsancho - Monday, April 28, 2008 - link

    It matters if 1 person or 2 people can service a box. And I would imagine cost of parts, too; commodity parts are awesome. But I work for companies that are cheap cheap cheap, so my opinion prolly doesn't matter =P
