Micro CPU benchmarks: isolating the FPU

"But you can't compare an Intel PC with an Apple. The software might not be optimised the right way." Indeed, Final Cut Pro, which is owned by Apple, and Adobe Premiere, which is far better optimised for the Intel PC, are clearly not good choices for comparing the G5 with the x86 world.

So, before we move on to application benchmarks, we ran a few micro benchmarks, compiled on all platforms with the same gcc 3.3.3 compiler.

The first one is Flops. Flops, written by Al Aburto, is a very floating-point-intensive benchmark. Analysis shows that it contains:
  • 70% floating-point instructions;
  • only 4% branches; and
  • 34% memory instructions.
Note that some of those FP instructions are also memory instructions. Benchmarking with Flops is not a real-world test, but it does isolate FPU power.

Al Aburto, about Flops:
" Flops.c is a 'C' program which attempts to estimate your systems floating-point 'MFLOPS' rating for the FADD, FSUB, FMUL, and FDIV operations based on specific 'instruction mixes' (see table below). The program provides an estimate of PEAK MFLOPS performance by making maximal use of register variables with minimal interaction with main memory. The execution loops are all small so that they will fit in any cache."
Flops shows the maximum double-precision power that the core has by making sure that the program fits in the L1-cache. Flops consists of eight tests, each with a different but well-known instruction mix. The most frequently used instructions are FADD (addition), FSUB (subtraction) and FMUL (multiplication). We compiled Flops on each platform with gcc -O2 flops.c -o flops.
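
To give an idea of what such a test looks like, here is a minimal sketch in the spirit of flops.c (this is not Al Aburto's actual code; the iteration count and the exact instruction mix are made up for the illustration). The kernel keeps its operands in registers so that only raw FADD/FSUB/FMUL throughput is measured:

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Register-resident operands: no memory traffic inside the loop. */
    register double a = 0.5, b = 1.5, c = 2.5, d = 0.0;
    const long iterations = 50000000L;   /* arbitrary for this sketch */
    long i;
    double seconds, mflops;
    clock_t start = clock();

    for (i = 0; i < iterations; i++) {
        d = a * b + c;    /* 1 FMUL + 1 FADD */
        c = d * a - b;    /* 1 FMUL + 1 FSUB */
    }

    seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
    mflops  = (4.0 * iterations) / (seconds * 1e6);
    /* Print the results so the compiler cannot optimise the loop away. */
    printf("~%.0f MFLOPS (checksum %f)\n", mflops, c + d);
    return 0;
}
```

Compiled with the same gcc -O2 on every platform, a loop like this stays entirely within the L1-cache, so the score reflects the FPU rather than the memory subsystem.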

Flops results in MFLOPS (higher is better):

| Module | FADD | FSUB | FMUL | FDIV | PowerMac G5 2.5 GHz | PowerMac G5 2.7 GHz | Xeon Irwindale 3.6 GHz | Xeon Irwindale 3.6 GHz w/o SSE2* | Xeon Gallatin 3.06 GHz | Opteron 250 2.4 GHz |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 50% | 0% | 43% | 7% | 1026 | 1104 | 677 | 1103 | 1033 | 1404 |
| 2 | 43% | 29% | 14% | 14% | 618 | 665 | 328 | 528 | 442 | 843 |
| 3 | 35% | 12% | 53% | 0% | 2677 | 2890 | 532 | 1088 | 802 | 1955 |
| 4 | 47% | 0% | 53% | 0% | 486 | 522 | 557 | 777 | 988 | 1856 |
| 5 | 45% | 0% | 52% | 3% | 628 | 675 | 470 | 913 | 995 | 1831 |
| 6 | 45% | 0% | 55% | 0% | 851 | 915 | 552 | 904 | 1030 | 1922 |
| 7 | 25% | 25% | 25% | 25% | 264 | 284 | 358 | 315 | 289 | 562 |
| 8 | 43% | 0% | 57% | 0% | 860 | 925 | 1031 | 910 | 1062 | 1989 |
| Average | | | | | 926 | 998 | 563 | 817 | 830 | 1545 |

* compiled with -mfpmath=387, i.e. without SSE2 (see below)

The results are quite interesting. First of all, the gcc compiler isn't very good at vectorising. By vectorising, we mean generating SIMD (SSE, Altivec) code. From the numbers, it seems that gcc was only able to use Altivec in one test, the third one. In that test, the G5 really shows its superiority over the Opteron and especially the Xeons.

The really remarkable thing is that the new Xeon Irwindale performed better when we disabled SSE2 support and used the "-mfpmath=387" option. It seems that the gcc compiler makes a real mess when it tries to optimise for SSE2 instructions. One can, of course, use the Intel compiler, which produces code that is up to twice as fast, but the Intel compiler is not widely used in the real world.

Also interesting is that the 3.06 GHz Xeon performs better than the 3.6 GHz Xeon Irwindale. Since Flops runs completely out of the L1-cache, the high L1-cache latency of Irwindale (4 cycles) hurts performance badly. The Gallatin Xeon, which is similar to Northwood, benefits from its very fast 2-cycle L1 latency.

The conclusion is that the Opteron has, by far, the best FPU, especially when more complex instructions such as FDIV (division) are used. When the code uses something close to the ideal mix of 50% FADD/FSUB and 50% FMUL and is optimised for Altivec, the G5 can flex its muscles. Its normal FPU is rather mediocre, though.

Micro CPU benchmarks: isolating the Branch Predictor

To test branch prediction, we used the benchmark "Queens". Queens is the very well-known problem of placing n chess queens on an n x n board such that no queen can attack another. The benchmark performs an exhaustive search for such a placement, which results in very branch-intensive code.

Queens has about:
  • 23% branches;
  • 45% memory instructions; and
  • no FP operations.
On a PIII, the branch misprediction rate is up to 19% (typical: 9%). Queens runs entirely within the L1-cache.
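
For reference, here is a minimal sketch of the kind of exhaustive backtracking search a Queens benchmark performs (not the benchmark's actual source; the board size is arbitrary). Every candidate square is checked against all previously placed queens, producing long chains of data-dependent, hard-to-predict branches:

```c
#include <stdio.h>

#define N 12                 /* board size, arbitrary for this sketch */

static int col[N];           /* col[r] = column of the queen on row r */

/* Returns 1 if a queen on (row, c) is attacked by an earlier queen. */
static int attacked(int row, int c)
{
    int r;
    for (r = 0; r < row; r++) {
        if (col[r] == c || row - r == c - col[r] || row - r == col[r] - c)
            return 1;        /* same column or same diagonal */
    }
    return 0;
}

/* Exhaustively counts all solutions from the given row downwards. */
static long solve(int row)
{
    long count = 0;
    int c;
    if (row == N)
        return 1;
    for (c = 0; c < N; c++) {
        if (!attacked(row, c)) {
            col[row] = c;
            count += solve(row + 1);
        }
    }
    return count;
}

int main(void)
{
    printf("%d-queens solutions: %ld\n", N, solve(0));
    return 0;
}
```

Whether the comparisons in attacked() are predicted correctly largely determines how often the long pipelines of the Xeon and the G5 have to be flushed.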

Queens run time in seconds (lower is better)
Powermac G5 2.5 GHz 134.110
Xeon Irwindale 3.6 GHz 125.285
Opteron 250 2.4 GHz 103.159

At 2.7 GHz, the G5 was just as fast as the Xeon (the 2.5 GHz result of 134 seconds scales to roughly 124 seconds at 2.7 GHz, on a par with the Xeon's 125). It is pretty clear that, despite the enormous 31-stage pipeline, the fantastic branch predictor of the Xeon's Pentium 4 core is capable of keeping the damage to a minimum. The Opteron's branch predictor seems to be on the level of the G5's: the G5's branch misprediction penalty is about 30% higher, and the Opteron finishes about 30% faster.

The G5 as workstation processor

It is well known that the G5 is a decent workstation CPU. The G5 is probably the fastest CPU when it comes to Adobe After Effects and Final Cut Pro, as this kind of software was made to run on a PowerMac. Unfortunately, we didn't have access to that kind of software.

First, we test with Povray, which is not optimised for any particular architecture and is single-threaded.

Povray render time in seconds (lower is better)
Dual Opteron 250 (2.4 GHz) 804
Dual Xeon DP 3.6 GHz 1169
Dual G5 2.5 GHz PowerMac 1125
Dual G5 2.7 GHz PowerMac 1049

Povray runs mostly out of the L2- and L1-caches, and it mirrors almost perfectly what we witnessed in our Flops benchmarks. As long as there are few or no Altivec or SSE2 optimisations present, the Opteron is by far the fastest CPU. The G5's FPU is still quite a bit better than the Xeon's.

The next two tests are the only 32-bit ones; on the x86 machines they were run under Windows XP.

Lightwave 8.0 render time (lower is better)

| | Raytrace | Tracer Radiosity |
|---|---|---|
| Dual Opteron 250 (2.4 GHz) | 47 | 204 |
| Dual Xeon DP 3.6 GHz | 47.3 | 180 |
| Dual G5 2.5 GHz PowerMac | 46.5 | 254 |

The G5 is capable of competing in only one of the two tests. Lightwave's rendering engine has been meticulously optimised for SSE2, and the "Netburst" architecture prevails here. We have no idea how much attention the software engineers gave Altivec, but it doesn't seem to be much. This might, of course, be a result of Apple's small market share.

Cinema 4D
Cinebench
Dual Opteron 250 (2.4 GHz) 630
Dual Xeon DP 3.6 GHz 682
Dual G5 2.5 GHz PowerMac 638
Dual G5 2.7 GHz PowerMac 682

Maxon has invested some time and effort in getting the Cinema 4D engine to run well on the G5, and it shows: the G5 competes with the best x86 CPUs.

Comments

  • edchi - Tuesday, June 26, 2007 - link


    I haven't tried this yet, but will do tomorrow. Here is what Apple suggests for creating a better MySQL installation:

    http://docs.info.apple.com/article.html?artnum=303...
  • grantma - Tuesday, April 18, 2006 - link

    I found Gnome was a lot more snappy than OS X desktop under Debian PowerPC. You could tell the kernel was far faster using Linux 2.6 - programs would just start immediately.
  • pecosbill - Wednesday, June 15, 2005 - link

    I'm not going to waste my time searching to see if these same comments were already made below, but the summary of them is that those who are performance-oriented tune their code for a specific CPU. You can do the same for an OS. Also, the "Big Mac" cluster at Virginia Tech speaks otherwise about raw performance, as OS X was the OS of choice. From macintouch.com:

    Okay, stop, I have to make an argument about why this article fails, before I explode. MySQL has a disgusting tendency to fork() at random moments, which is bad for performance essentially everywhere but Linux. OS X server includes a version of MySQL that doesn't have this issue.
    No real arguments that Power Macs are somewhat behind the times on memory latency, but that's because they're still using PC3200 DDR1 memory from 2003. AMD/Intel chips use DDR2 or Rambus now ... this could be solved without switching CPUs.
    The article also goes out of its way to get bad results for PPC. Why are they using an old version of GCC (3.3.x has no autovectorization, much worse performance on non-x86 platforms), then a brand spanking new version of mySQL (see above)? The floating point benchmark was particularly absurd:

    "The results are quite interesting. First of all, the gcc compiler isn't very good in vectorizing. With vectorizing, we mean generating SIMD (SSE, Altivec) code. From the numbers, it seems like gcc was only capable of using Altivec in one test, the third one. In this test, the G5 really shows superiority compared to the Opteron and especially the Xeons"

    In fact, gcc 3.3 is unable to generate AltiVec code ANYWHERE, except on x86 where they added a special SSE mode because x87 floating point is so miserable. This could have been discovered with about 5 minutes of Google research. It wouldn't have had to be discovered at all if they hadn't gone out of their way to use a compiler which is not the default on OS X 10.4. Alarm bells should have been going off in the benchmarker's head when an AMD chip outperforms an Intel one by 3x, but, anyway ...
    I hate to seem like I'm just blindly defending Apple here, but this article seems to have been written with an agenda. There's no way one guy could stuff this much stuff up. To claim there's something inherently wrong with OS X's ability to be a server is going against so much publicly available information it's not even funny. Notice Apple seems to have no trouble getting Apache to run with Linux-like performance: [Xserve G5 Performance].
    Anyway ... on a more serious note, a switch of sorts to x86 may not be a hugely insane idea. IBM's ability to produce a low power G5 part seems to be seriously in question, so for PowerBooks Apple is pretty much running out of options. Worse comes to worst - if they started selling x86-powered portables, that might get IBM to work a bit harder to get them faster desktop chips.
    -- "A Macintosh MPEG software developer"
  • christiansen89 - Monday, December 6, 2021 - link

    Guess there's no one arguing that the PPC is not keeping its pace with the current market, but rather OS/X able to do Big Iron computing. And if rumors are true, where will you be able to get a PPC built once Apple drops IBM for Intel?
  • aladdin0tw - Tuesday, June 14, 2005 - link

    This is the first time I have seen someone use the 'ab' command to conduct a test and then try to tell us something from it.

    In my opinion, ab is not a 'stress test' tool by any measure, especially when you want to draw credible benchmark conclusions from it. If we could accept 'ab', why would I have to write so much code for a stress test?

    'localhost' is another problematic area: DNS. Why not use a fixed IP as the address? The first rule of benchmarking is to isolate the domain in question, but I cannot see that you obey this rule. So how can you interpret your result as a performance fault rather than a DNS-related problem?

    I think you should benchmark again, and try some of the good practices used in the software industry.

    Aladdin from Taiwan
  • demuynckr - Sunday, June 12, 2005 - link

    jhagman, the number in the Apache test table is the number of requests per second that the server handles.
  • jhagman - Wednesday, June 8, 2005 - link

    Hi again, demuynckr.

    Could you please answer me, or preferably add the information to the article: what does the number in the Apache test table mean, and what kind of page was loaded?

    I assumed that the numbers given were hits per second or transfer rate. I've been testing a bit on my powerbook (although with a lower n) and I can very easily beat the numbers you have. So it is apparent that my assumption was wrong.

    BTW, gcc-3.3 on Tiger knows the switch -mcpu=G5
  • rubikcube - Wednesday, June 8, 2005 - link

    I thought I would post this set of benchmarks for OS X on x86 vs. PPC. Even though XBench is a questionable benchmark, it is still capable of shedding some light on these questions about linux-ppc.

    http://www.macrumors.com/pages/2005/06/20050608063...
  • webflits - Wednesday, June 8, 2005 - link

    "Yes I have read the article, I also personally compiled the microbenchmarks on linux as well as on the PPC, and I can tell you I used gcc 3.3 on Mac for all compilation needs :)."

    I believe you :)

    But why are the results I get way higher than the numbers listed in the article?
