
AMD unveiled their Opteron 6300 series server processors, codenamed "Abu Dhabi", back in November 2012. At that time, no review samples were available. The numbers AMD presented were somewhat confusing: the best results came from the hard-to-assess SPECjbb2005 benchmark, while the SPEC CPU2006 numbers were rather underwhelming.

Compared to an Opteron 6278 at 2.4GHz, the Opteron 6380 (2.5GHz) performed 24% better, and performance per watt improved by 40% according to AMD. In contrast, SPECint_Rate2006 improved by only 8%, and SPECfp_Rate2006 by 7%. However, it is important to note that SPEC CPU2006 rates do not scale well with clock speed: an 8% higher clock speed (6380 vs. 6376) only results in a 3.5% higher SPECint_Rate2006 and a 3% higher SPECfp_Rate2006. And the SPEC CPU2006 benchmarks were showing the Interlagos Opteron at its best anyway. You can read our analysis here.

Both benchmarks have only a distant link to real server workloads, so we could conclude only two things. First, performance per GHz has improved and power consumption has gone down. Second, we can only be sure this is the case for well-optimized, even completely recompiled code: the compiler settings of SPEC CPU2006 and the JVM settings of SPECjbb produce binaries that simply do not exist on servers running real applications.

So is the new Opteron "Abu Dhabi" only a few percent faster, or is it tangibly faster when running real-world code? And are the power consumption gains marginal at best, or clearly measurable? Most of our benchmarks are real world, so we will find out over the next several pages as we offer our full review of the Opteron 6300.

Positioning: SKUs and Servers
55 Comments

  • coder543 - Wednesday, February 20, 2013 - link

    You realize that we have no trouble recognizing that you've posted about fifty comments that are essentially incompetent racism against AMD, right?

    AMD's processors aren't perfect, but neither are Intel's. And also, AMD, much to your dismay, never announced they were planning to get out of the x86 server market. They'll be joining the ARM server market, but not exclusively. I'm honestly just ready for x86 as a whole to be gone, completely and utterly. It's a horrible CPU architecture, but so much money has been poured into it that it has good performance for now.
  • Duwelon - Thursday, February 21, 2013 - link

    x86 is fine, just fine.
  • coder543 - Wednesday, February 20, 2013 - link

    totes, ain't nobody got time for AMD. they is teh failzor.

    (yeah, that's what I heard when I read your highly misinformed argument.)
  • quiksilvr - Wednesday, February 20, 2013 - link

    Obvious trolling aside, looking at the numbers, it's pretty grim. Keep in mind that these are SERVER CPUs. Not only is Intel doing the job faster, it's using less energy, and paying a mere $100-$300 more per CPU to cut an average of 20 watts is a no-brainer. These are expected to run 24 hours a day, 7 days a week with no stopping. That power adds up, and if AMD has any chance of making a dent in high-end enterprise datacenters, they need to push even more.
  • Beenthere - Wednesday, February 20, 2013 - link

    You must be kidding. TCO is what enterprise looks at and $100-$300 more per CPU in addition to the increased cost of Intel based hardware is precisely why AMD is recovering server market share.

    If you do the math you'll find that most servers get upgraded long before the difference in power consumption between an Intel and an AMD CPU would pay for itself. The total wattage per CPU is not the actual wattage used under normal operation, and AMD's power-saving options in their FX-based CPUs are as good as or better than Intel's in IB. The bottom line is that those who write the checks are buying AMD again, and that's what really counts, in spite of the trolling.

    Rory Read has actually done a decent job so far even though it's not over and it has been painful, especially to see some talent and loyal AMD engineers and execs part ways with the company. This happens in most large company reorganizations and it's unfortunate but unavoidable. Those remaining at AMD seem up for the challenge and some of the fruits of their labor are starting to show with the Jaguar cores. When the Steamroller cores debut later this year, AMD will take another step forward in servers and desktops.
  • Cotita - Wednesday, February 20, 2013 - link

    Most servers have a long life. You'll probably upgrade memory and storage, but the CPU is rarely upgraded.
  • Guspaz - Wednesday, February 20, 2013 - link

    Let's assume $0.10 per kilowatt-hour. At that rate, a $100 price difference corresponds to 1,000 kWh, and at a 20W difference in consumption it would take 50,000 hours to save that much energy. So the price difference would pay for itself (at $100) in about 6 years of continuous operation.

    So yes, the power savings aren't really enough to justify the cost increase. The higher IPC of the Intel chips, however, might be.
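As a sanity check, the back-of-envelope payback math in the comment above can be expressed as a short calculation. The $100 price gap, 20W consumption delta, and $0.10/kWh rate are the commenter's assumed figures; the function name is ours:

```python
def payback_years(price_delta_usd, watt_delta, usd_per_kwh):
    """Years of 24/7 operation needed for the energy savings to
    cover the upfront CPU price difference."""
    kwh_to_save = price_delta_usd / usd_per_kwh   # energy worth the price gap
    hours = kwh_to_save / (watt_delta / 1000.0)   # convert watts to kilowatts
    return hours / (24 * 365)

# Commenter's figures: $100 price gap, 20 W delta, $0.10/kWh
print(f"{payback_years(100, 20, 0.10):.1f} years")  # roughly 5.7 years
```

At 50,000 hours this comes out to just under 6 years, matching the comment's estimate; a higher electricity rate or larger wattage delta shortens the payback proportionally.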
  • bsd228 - Wednesday, February 20, 2013 - link

    You're only getting part of the equation here. That extra 20W of power consumed mostly turns into heat, which then must be cooled (requiring more power and more AC infrastructure). A rack can hold over 20 2U servers with two processors each, which means nearly an extra kilowatt per rack, plus the corresponding extra heat.

    Also, power costs can vary considerably. I was at a company paying 16-17 cents per kWh in Oakland, CA; 11 cents in Sacramento; but only 2 cents in Central Washington (hydropower).
  • JonnyDough - Wednesday, February 20, 2013 - link

    +as many as I could give. Best post!
  • Tams80 - Wednesday, February 20, 2013 - link

    I wouldn't even ask the NYSE for the time of day.
