Conclusion

It has been more than three years since the previous generation Xeon E7 hit the market. IBM and Oracle have overtaken the old Xeon E7 in the meantime, and an update was long overdue. Since then, Intel has launched two new architectures in the dual socket server CPU market: the Xeon E5-2600, based on the "Sandy Bridge" architecture, and the Xeon E5-2600 v2 ("Ivy Bridge").

The new Xeon core has already shown its worth in dual socket Xeon E5-2600 v2 based servers. It is interesting to note that the two architecture updates, Sandy Bridge and Ivy Bridge, although relatively subtle on their own, have together increased the integer performance of each individual core by about 30%. The many subtle changes also improve performance per watt, and the excellent 22nm process technology enables a 50% higher core count. The end result is that general computing performance has doubled in scalable integer applications (SAP) and tripled in floating point applications. And there is more.
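
As a quick back-of-the-envelope check on that doubling (our own arithmetic, not an Intel figure), the per-core gain and the core-count gain roughly multiply out as sketched below; treating the comparison as the 10-core Westmere-EX versus the 15-core Ivy Bridge EX is our assumption:

    # Rough scaling estimate using the figures quoted above (assumed, not official Intel numbers)
    per_core_gain = 1.30        # ~30% higher integer performance per core (Sandy Bridge + Ivy Bridge combined)
    core_count_gain = 15 / 10   # assumption: 15-core Ivy Bridge EX vs. the 10-core Westmere-EX it replaces
    print(per_core_gain * core_count_gain)   # ~1.95x, in line with the doubled scalable integer (SAP) results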

We are entering the big data era, and the result is a strong and renewed interest in (almost) real-time data mining. One of the prime examples is SAP with its in-memory, compressed database platform SAP HANA. Both Microsoft with SQL Server 2014 and IBM with DB2 10.5 (with its so-called BLU acceleration) are following suit. Therefore, it is likely that there will be strong demand for a server platform with massive RAM capacity. The new quad socket Xeon servers can offer up to 3TB of RAM with relatively affordable 32GB DIMM technology and no less than 6TB with ultra-expensive 64GB LR-DIMMs. That is another reason why the Intel Xeon E7 v2 platform will be more attractive than much more expensive RISC servers, which are typically limited to 1-2TB.
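
Those capacity figures follow directly from the DIMM slot count; a minimal sketch of the arithmetic, assuming the quad socket platform's usual 24 DIMM slots per socket:

    # Memory capacity sketch for a quad socket Xeon E7 v2 server
    # (24 DIMM slots per socket is our assumption of the typical memory-buffer configuration)
    sockets = 4
    dimm_slots_per_socket = 24
    for dimm_gb in (32, 64):                                  # affordable 32GB DIMMs vs. expensive 64GB LR-DIMMs
        total_tb = sockets * dimm_slots_per_socket * dimm_gb / 1024
        print(f"{dimm_gb}GB DIMMs: {total_tb:.0f} TB")        # 3 TB and 6 TB respectively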

Overall, Intel's launch of the tried and proven Ivy Bridge cores looks set to raise performance expectations to a new level. Ivy Bridge EX may seem awfully late compared to the IVB and IVB EP releases, but that is typical of this server segment. The Xeon E7 v2 chips are slated to remain in data centers for the next several years as the most robust, and most expensive, offerings from Intel. If you can use a larger number of smaller servers instead of a few large ones, that will certainly be more cost effective, but the types of applications typically run on these servers and the demands of the software frequently make the hardware costs a secondary consideration.

Comments

  • Brutalizer - Tuesday, February 25, 2014 - link

    Clusters cannot replace SMP servers. Clusters cannot run SMP workloads.
  • Kevin G - Tuesday, February 25, 2014 - link

    I'm sorry, but it is considered best practice to run databases in pairs for redundancy. For example, here is an Oracle page explaining how clustering is used to maintain high availability: http://docs.oracle.com/cd/B28359_01/server.111/b28...

    Other databases like MySQL and MS SQL Server have similar offerings.

    There is a reason why big hardware like this is purchased in pairs or sets of three.
  • EmmR - Friday, March 14, 2014 - link

    Kevin G, you are actually correct. We are in the process of comparing the performance of Power7+ vs Xeon v2 for SAP batch workloads, and we got pretty much the same arguments from our AIX guys as Brutalizer mentioned.

    We are using real batch jobs rather than a synthetic benchmark, and we set up each system to compare core-for-core, down to running a memory defrag on the Power system to make sure memory access is as good as possible. The only thing we could not fix is that the Intel system was handicapped in terms of network access.

    What we are seeing is that we can tune the Intel system to get basically the same performance (< 5% difference in total runtime) as the Power7+ system (P780). This was quite unexpected, but it's an illustration of how far Intel and the hardware vendors building servers/blades based on those CPUs have come.
  • Kevin G - Monday, March 17, 2014 - link

    Looking at the Xeon E7 v2's right now is wise since they're just hitting the market and the core infrastructure is expected to last three generations. It wouldn't surprise me if you could take a base system today using memory daughter cards and eventually upgrade it to Broadwell-EX and more DDR4 memory by the end of the product life cycle. This infrastructure is going to be around for a while.

    POWER7+, on the other hand, is going to be replaced by the POWER8 later this year. I'd expect it to perform better than the POWER7+, though how much better will have to wait for the benchmarks after it is released. There is always going to be something faster/better/cheaper coming down the road in the computing world. Occasionally waiting makes sense due to generational changes like this. Intel and IBM tend to leapfrog each other, and it is IBM's turn to jump.

    Ultimately, if you gotta sign the check next week, I'd opt for the Xeon, but if you can hold off a few months, I'd see what the POWER8 brings.
  • EmmR - Monday, March 17, 2014 - link

    Power8 will be interesting to look at, but based on current data it will have to yield a pretty impressive performance boost over Power7+ (and Xeon v2) in order to be competitive on performance per dollar.
  • Kevin G - Monday, March 17, 2014 - link

    IBM is claiming two to three times the throughput of the POWER7+. It isn't hard to see where most of that gain comes from: the core count increases from 8 to 12. That change alone will put it ahead of the Xeon E7 v2's in terms of raw performance. Minor IPC and clock speed increases are expected too. The increase from 4-way to 8-way SMT will help some workloads, though it could also hurt others (IBM does support dynamic changes in SMT, so this is straightforward to tune). The rest will likely come from system level changes like lower memory access times, thanks to the L4 cache on the serial-to-parallel memory buffer, and more bandwidth all around. What really interests me is that IBM is finally dropping the GX bus they introduced for coherency in the POWER4; what the POWER8 does is encapsulate coherency over a PCIe physical link. It'll be interesting to see how it plays out.

    As you may suspect, the cost of this performance may be rather high. We'll have to see when IBM formally launches systems.
  • amilayajr - Thursday, March 6, 2014 - link

    I think Brutalizer is saying that this new Xeon CPU is aimed at a targeted market. Unix has long been the backbone of the internet, while Intel wants to cover as much of the general server market as it can. Sure, it's a nice CPU, but as far as reliability goes, I would rather use a slower system that is reliable in terms of calculations. I would still give Intel a thumbs up for trying something new and updating the CPU. As for replacing Unix servers for large enterprise database servers, that probably won't happen for Intel for a long time. I would say Intel should leave this area to the real experts who focus on that market; Intel is just covering its turf in the smaller scale server market.
  • Kevin G - Thursday, March 6, 2014 - link

    The x86 servers have caught up in RAS features. High end features like hot memory add/remove are available on select systems. (Got a bad DIMM? Replace it while the system is running.) Processor add/remove on a running system is also possible on newer systems but requires some system level support (though I'm not immediately familiar with a system offering it). In most cases, with the baseline RAS features, Xeons are more than good enough for the job. Hardware lockstep is also an option on select systems.

    Uses for ultra high end features like two-bit error correction for memory, RAID5-like parity across memory channels, and hot processor add/remove are a very narrow niche. Miscellaneous features like instruction replay don't actually add much in terms of RAS (replay on Itanium is used mainly to fill unused instruction slots in its VLIW architecture, whereas lockstep would catch a similar error in all cases). Really, the main reason to go with Unix is on the software side, not the hardware side anymore.
  • djscrew - Wednesday, March 12, 2014 - link

    "Sound like we are solving a problem with hardware instead of being innovative in software."

    that doesn't happen... ever... http://www.anandtech.com/show/7793/imaginations-po... ;)
  • mapesdhs - Sunday, February 23, 2014 - link

    Brutalizer writes:
    "Some examples of Scale-out servers (clusters) are all servers on the Top-500 supercomputer list. Other examples are SGI Altix / UV2000 servers or the ScaleMP server, they have 10,000s of cores and 64 TB RAM or more, i.e. cluster. Sure, they run a single unified Linux kernel image - but they are still clusters. ..."

    Re the UV, that's not true at all. The UV is a shared memory system with a hardware MPI implementation. It can scale codes well beyond just a few dozen sockets. Indeed, some of the key work going on at the moment is how to scale relevant codes beyond 512 CPUs, not just 32 or 64. The Cosmos installation is one such example. Calling a UV a cluster is just plain wrong. Its shared memory architecture means it can handle very large datasets (hundreds of GB) and extremely demanding I/O workloads; no conventional 'cluster' can do that.

    Ian.
