Price comparison & Final Words

In previous articles, we looked at the cost of the processor itself. Since servers aren't just about the processor, we've extended our pricing comparison to entire platforms. We've attempted to spec out Intel and AMD servers from two different vendors and match them as closely as possible in terms of features. There are obviously a few differences here and there, but as illustrated below, once the features missing from each platform are taken into account, the price difference between the two is negligible. Note that we are comparing dual Intel 3.6 GHz 1MB L2 based servers against dual Opteron 250 servers, since the newer products discussed in this article are not yet in the retail channel.

|  | HP ProLiant DL360 SCSI | HP ProLiant DL145 SCSI | IBM xSeries 336 | IBM eServer 326 |
|---|---|---|---|---|
| Platform | Intel | AMD | Intel | AMD |
| CPU | Dual 3.6 GHz 1MB L2 | Dual Opteron 250 (2.4 GHz) | Dual 3.6 GHz 1MB L2 | Dual Opteron 250 (2.4 GHz) |
| Memory | 2GB | 2GB | 2GB | 2GB |
| Hard Drive | 36.4GB Pluggable Ultra320 (15,000 RPM) | 36.4GB Non-Pluggable Ultra320 (15,000 RPM) | IBM 36GB 2.5" 10K SCSI HDD HS | 36GB 10K U320 SCSI HS Option |
| SCSI Controller | Smart Array 6i Plus controller (onboard) | Dual Channel Ultra320 SCSI HBA | Integrated Single-Channel Ultra320 SCSI Controller (Standard) | Integrated Single-Channel Ultra320 SCSI Controller (Standard) |
| Bays | Two Ultra320 SCSI hot-plug drive bays | Two non-hot-plug hard drive bays | 4 hot-swap bays | 2 hot-swap bays |
| Network | NC7782 PCI-X Gigabit NICs (embedded) | Broadcom 5704 Gigabit NICs (embedded) | Dual integrated 10/100/1000 Mbps Ethernet (Standard) | Dual integrated 10/100/1000 Mbps Ethernet (Standard) |
| Power | 460W hot-pluggable power supply | 500W non-hot-plug power supply | 585W power supply | 411W power supply (Standard) |
| Server Management | SmartStart & Insight Manager | None | System Management Processor (Standard) | System Management Processor (Standard) |
| OS | None | None | None | None |
| Cost | $5,946 | $5,009 | $5,476 | $5,226 |

Final Words

We've illustrated how workload has a significant effect on the platform decision for database servers. Obviously, for a small to medium business, where multiple different workloads run on the same server, choosing a platform architecture best suited to data warehousing alone doesn't make sense. But for larger organizations, where multiple database servers are used, each with a specific purpose, the choice of one platform over another can have a significant impact on performance. For dual-processor applications, Intel leads the way in everyday small to heavy transactional applications, whereas AMD shines on the analytical side of database applications: data warehousing.

These results do raise some questions as to what exactly is going on during each test at an architectural level. Is the processor waiting for data from the L2 cache? Are the branch prediction units not suited to this particular workload? Is there a bottleneck in memory latency? We want these questions answered, and we are going to investigate ways to provide concrete answers to them in the future.



97 Comments


  • Viditor - Monday, February 14, 2005 - link

    "DMA operations initiated by a peripheral device that does not directly support 64-bit addressing will have performance issues"

    I'm not sure you are correct in this...I believe the issue is

    "physical addresses above 4GB (32 bits) cannot reliably be the source or destination of DMA operations"

    I found another article that explains my concern quite well...
    http://www.spodesabode.com/content/article/nocona/...

    "Unlike the Itanium, which is solely a 64-bit processor, these chips have the ability to run in both 32-bit and 64-bit mode. Some devices, such as the large majority of PCI cards, cannot directly access memory above the 4GB point. To solve this, the software has to ensure the physical memory address is below the 4GB point. AMD solved this by using a hardware IOMMU, which is effectively a "bounce buffer" or look-up table of physical memory addresses corresponding to a virtual address that is given to the incompatible hardware, allowing it to use memory above the 4GB barrier.
    Intel's solution isn't quite as elegant. If a device needs to access memory above the 4GB point, the data is just copied from wherever it is to a fixed location below the 4GB point. This takes time and can reduce performance. In extreme cases, we have heard there could be as much as a 30-50% decrease in performance on the Nocona platform"

    This does not appear to be a 64-bit driver issue to me, as none of the access scenarios is described as 32-bit...
  • Accord99 - Monday, February 14, 2005 - link

    I think it's an Intel thing, their not having an IOMMU. Even their chipsets for the Itanium 2 don't have one, while HP's and SGI's chipsets do. Or perhaps Intel just wants (and has the power to force) peripheral manufacturers to make proper 64-bit devices and drivers.
  • Viditor - Monday, February 14, 2005 - link

    OIC what you are saying...and yes, it's a problem with the chipset. Of course that is exactly what I said in the first place...

    "Because there is still no hardware IOMMU on Xeon chipsets"

    The big question is, why hasn't Intel fixed this?
    I can only assume that it is a design problem for them that is inherent to EM64T...
    I can't imagine that they would just let this slide on their chipset development.
    What that problem is, I have no idea...I would just like to see what effect it has on system function.
  • Accord99 - Monday, February 14, 2005 - link

    The linuxhardware article supports what I'm saying: DMA operations initiated by a peripheral device that does not directly support 64-bit addressing will have performance issues. Server-level peripherals typically support 64-bit addressing, and it is not a problem with the CPU or the EM64T instruction set; it is a problem with the chipset. It does not affect the Xeon's ability to flatly address >4GB of memory.
  • Viditor - Monday, February 14, 2005 - link

    I don't believe I do...
    I have read that post before, and I don't see your point.

    Try reading this article to understand what I'm saying:
    http://www.linuxhardware.org/article.pl?sid=04/10/...

    "Software IOTLB — Intel EM64T does not support an IOMMU in hardware while AMD64 processors do. This means that physical addresses above 4GB (32 bits) cannot reliably be the source or destination of DMA operations. Therefore, the Red Hat Enterprise Linux 3 Update 2 kernel "bounces" all DMA operations to or from physical addresses above 4GB to buffers that the kernel pre-allocated below 4GB at boot time. This is likely to result in lower performance for IO-intensive workloads for Intel EM64T as compared to AMD64 processors."
    "Although this shouldn't affect people that run with under 4GB of memory, this is an important point to note. If you do ever need the extra memory, you may take a performance hit. Unfortunately, we do not have over 4GB of DDR2 memory here today so we will not be able to test how much of a hit you would take, if any."

    The bottom line is that many believe (including myself) the physical addressing will be a significant problem, and many (including you) don't.
    That's why I have requested that AT do an actual test...nothing like reality to settle a discussion...:-)

    BTW, thanks for correcting my typo...
  • Dubb - Monday, February 14, 2005 - link

    /taps fingers impatiently waiting on rendering benchmarks...

    which hopefully include (hint hint)

    mental ray
    brazil
    renderman
  • Accord99 - Monday, February 14, 2005 - link

    Your understanding of the IOMMU is wrong. Please refer to this thread:

    http://realworldtech.com/forums/index.cfm?action=d...

    Also, the Xeon supports 36-bits.
  • Viditor - Monday, February 14, 2005 - link

    Accord99 - "The IOMMU is only used for peripherals that don't support 64-bit addressing"

    The IOMMU is a memory mapping unit sitting between the I/O bus and physical memory. While the Xeon's memory controller can address 64 bits, it relies on PAE to do so, because current chipsets only address 32 bits. The on-die memory controller of AMD64 chips addresses 40 bits...
  • Accord99 - Monday, February 14, 2005 - link

    The hardware IOMMU has no impact on the Xeon's ability to flat address >4GB of memory. The IOMMU is only used for peripherals that don't support 64-bit addressing, ie USB 2.0 cards, EIDE controllers, soundcards, some network controllers and will reduce IO performance for these devices. High-end 64-bit SCSI controllers, gigabit network controllers and newer SATA controllers all support 64-bit addressing and run at full performance.
  • Viditor - Monday, February 14, 2005 - link

    One other request (if it's possible)...
    Just so we can get a well-rounded view on the results, is it possible for you to do a Solaris/Oracle and Linux/MySQL (or Linux/Postgres) test?
    I realise that I'm asking a lot, but if you have the time...:-)
