Decision Support benchmark: Nieuws.be

Operating System: Windows 2008 Enterprise RTM (64-bit)
Software: SQL Server 2008 Enterprise x64 (64-bit)
Benchmark software: vApus + real-world "Nieuws.be" database
Database size: > 100 GB
Typical error margin: 1-2%

 

The Flemish/Dutch Nieuws.be site is one of the newest web 2.0 websites, launched in 2008. It gathers news from many different sources and allows the reader to completely personalize his view on all this news. Needless to say, the Nieuws.be site is sitting on top of a pretty large database, more than 100 GB and growing. This database consists of a few hundred separate tables, which have been carefully optimized by our lab (the Sizing Server Lab).

Almost all of the load on the database consists of selects (99%), and about 5% of those are stored procedures. Network traffic averages 6.5MB/s and peaks at 14MB/s, so our Gigabit network connection still has plenty of headroom. Disk Queue Length (DQL) is at 2 in the first round of tests, but we only report the results of the subsequent rounds, where the database is in a steady state. We measured a DQL close to 0 during these tests, so there is no tangible interference from the hard disks.
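
For readers who want to profile their own workload in a similar way, the sketch below shows one way such a breakdown could be computed from a captured statement trace. The trace file name (vapus_trace.log) and the one-statement-per-line format are illustrative assumptions, not the actual vApus log format.

# Minimal sketch: classify each statement in a captured workload trace as a
# select, a stored-procedure call, or something else, and report percentages.
# The trace file name and its one-statement-per-line layout are assumptions.
from collections import Counter

def classify(statement):
    s = statement.lstrip().upper()
    if s.startswith("SELECT"):
        return "select"
    if s.startswith(("EXEC", "EXECUTE")):
        return "stored procedure"
    return "other"

def workload_profile(path):
    counts = Counter()
    with open(path, encoding="utf-8") as trace:
        for line in trace:
            if line.strip():
                counts[classify(line)] += 1
    total = sum(counts.values()) or 1
    return {kind: 100.0 * n / total for kind, n in counts.items()}

if __name__ == "__main__":
    # For a decision-support workload like this one, the "select" bucket
    # should end up around 99%.
    print(workload_profile("vapus_trace.log"))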

We now use a new, even heavier log. As the Nieuws.be application became more popular and more complex, the database has grown and the queries have become more complex too. The results are therefore no longer comparable to our previous results: the relative standings are similar, but the absolute numbers are much lower.

Nieuws.be MS SQL Server 2008 - New Heavy log!

Pretty amazing performance here. And while AMD gets a pat on the back, it is the hard-working people of the Microsoft SQL Server team who deserve our kudos. Our calculations show that SQL Server gains about 80% more performance when we add an extra 12 cores, which is simply awesome scaling. The result of this scaling is that, for once, you can see which CPUs have real cores and which have virtual (Hyper-Threading) cores: the 12-core Opteron 6174 outperforms the best Xeon by 20%. People running transactional databases should go for the Intel CPUs, while the data miners should consider the latest Opteron. The architectures that AMD and Intel have chosen are complete opposites, and the result is that the differences between the software categories are dramatic. Profile your software before you make a choice! It has never been so important.
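
To make the scaling claim concrete, here is the back-of-the-envelope calculation. The scores below are illustrative placeholders chosen only to mirror the roughly 80% gain quoted above; they are not the actual benchmark numbers.

# Back-of-the-envelope scaling check with placeholder scores.
def scaling_efficiency(base_score, scaled_score, base_cores, scaled_cores):
    # Achieved speed-up divided by the ideal (linear) speed-up.
    speedup = scaled_score / base_score
    ideal = scaled_cores / base_cores
    return speedup / ideal

if __name__ == "__main__":
    # Going from 12 to 24 cores with an ~80% throughput gain:
    print(f"{scaling_efficiency(100.0, 180.0, 12, 24):.0%}")  # ~90% efficiency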

 

Comments

  • wolfman3k5 - Monday, March 29, 2010 - link

    Great review, thanks! When will you guys be reviewing the AMD Phenom II X6 for us mere mortals? I wonder how the Phenom II X6 will stack up against the Core i7 920/930.

    Keep up the good work!
  • ash9 - Tuesday, March 30, 2010 - link

    Since SSE4.1 and SSE4.2 are not in AMD's CPUs, it's AnandTech's way of getting an easy benchmark win, seeing as some of these benchmark tests probably use them:

    http://blogs.zdnet.com/Ou/?p=719
    August 31st, 2007
    SSE extension wars heat up between Intel and AMD

    "Microprocessors take approximately five years to go from concept to product and there is no way Intel can add SSE5 to their Nehalem product and AMD can’t add SSE4 to their first-generation 45nm CPU “Shanghai” or their second-generation 45nm “Bulldozer” CPU even if they wanted to. AMD has stated that they will implement SSE4 following the introduction of SSE5 but declined to give a timeline for when this will happen."

    asH
  • mariush - Tuesday, March 30, 2010 - link

    One of the best-optimized and most heavily multi-threaded applications out there is the open-source video encoder x264.

    Would it be possible to test how well 2x8 and 2x12 AMD configurations work at encoding 1080p video at some very high quality settings?

    A workstation with 24 AMD cores would cost almost as much as a single-socket six-core system from Intel, so it would be interesting to see whether the increase in frequency and the additional SSE instructions are more of an advantage than the number of cores.
  • Aclough - Tuesday, March 30, 2010 - link

    I wonder if the difference between the Windows and Linux test results is related to the recent-ish changes in the scheduler? From what I understand, the introduction of the CFS in 2.6.23 was supposed to be really good for large numbers of cores, and I'm given to understand that before that the Linux scheduler worked similarly to the recent Windows one. It would be interesting to try running that benchmark with a 2.6.22 kernel, or one with the old O(1) scheduler patched in.

    Or it could just be that Linux tends to be more tuned for throughput whereas Windows tends to be more tuned for low latency. Or both.
  • Aclough - Tuesday, March 30, 2010 - link

    In any event, the place I work for is a Linux shop and our workload is probably most similar to Blender, so we're probably going to continue to buy AMD.
  • ash9 - Tuesday, March 30, 2010 - link

    http://www.egenera.com/pdf/oracle_benchmarks.pdf


    "Performance testing on the Egenera BladeFrame system has demonstrated that the platform
    is capable of delivering high throughput from multiple servers using Oracle Real Application
    Clusters (RAC) database software. Analysis using Oracle’s Swingbench demonstration tool
    and the Calling Circle schema has shown very high transactions-per-minute performance
    from single-node implementations with dual-core, 4-socket SMP servers based on Intel and
    AMD architectures running a 64-bit-extension Linux operating system. Furthermore, results
    demonstrated 92 percent scalability on either server type up to at least 10 servers.
    The BladeFrame’s architecture naturally provides a host of benefits over other platforms
    in terms of manageability, server consolidation and high availability for Oracle RAC."
  • nexox - Tuesday, March 30, 2010 - link

    It could also be that Linux has a NUMA-aware scheduler, so it would try to keep data stored in RAM that is connected to the core running the thread which needs to access that data. I probably didn't explain that too well, but it would cut down on memory latency because it would minimize going out over the HT links to fetch data. I doubt that Windows does this, given that Intel hasn't had NUMA systems for very long yet.

    I sort of like to see more Linux benchmarks, since that's really all I'd ever consider running on data center-class hardware like this, and since apparently Linux performance has very little to do with Windows performance, based on that one test.
  • yasbane - Wednesday, May 19, 2010 - link

    Agreed. I do find it disappointing that they ran so few server benchmarks on Linux and so many on Windows.

    -C
  • jbsturgeon - Tuesday, March 30, 2010 - link

    I like the review and enjoyed reading it. I can't help but feel the benchmarks are less a comparison of CPUs and more a study of how well the apps can be threaded, as well as the implementation of that threading -- higher-clocked CPUs will be better for serial code, and more cores will win for apps that are well threaded. In scientific number crunching (the code I write), more cores always wins (AMD). We do use Fluent too, so thanks for including those benchmarks!!
  • jbsturgeon - Tuesday, March 30, 2010 - link

    Obviously that rule can be altered by a killer memory bus :-).
