Workstation, yes; Server, no.

The G5 is a gigantic improvement over the previous CPU in the Power Mac, the G4e. The G5 is one of the most aggressively superscalar CPUs ever, and it has all the characteristics that could give Apple the edge, especially now that the clock speed race between AMD and Intel is over. However, there is still a lot of work to be done.

First of all, the G5 needs lower latency access to memory, because right now the integer performance of the G5 leaves a lot to be desired. The Opteron and Xeon have better integer engines, and the Pentium 4/Xeon in particular also has a better branch predictor. The Opteron's memory subsystem runs circles around the G5's.

Secondly, it is clear that the G5's FP performance, despite its access to 32 architectural registers, needs good optimisation. Only one of our flops tests was "Altivectorized", which means that the GCC compiler needs to improve quite a bit before it can turn the many open source programs out there into super fast applications on the Mac. In contrast, the Intel compiler can vectorize all eight tests.

Altivec, also known as the Velocity Engine, can make the G5 shine in workstation applications. A good example is Lightwave, where the G5 takes on the best x86 competition in some situations and remains behind in others.

The future looks promising in the workstation market for Apple, as the G5 has a lot of unused potential, and the increasing market share of the Power Mac should tempt developers to put a little more effort into Mac optimisation.

The server performance of the Apple platform is, however, catastrophic. When we asked Apple for a reaction, they told us that some database vendors, such as Sybase and Oracle, have found a way around the threading problems. We'll try Sybase later, but frankly, we are very sceptical. The whole "multi-threaded Mach microkernel trapped inside a monolithic FreeBSD cocoon with several threading wrappers and coarse-grained threading access to the kernel", with a "backwards compatibility" millstone around its neck, sounds like a bad fusion recipe for performance.

Workstation apps will hardly mind, but the performance of server applications depends greatly on the threading, signalling and locking engine. I am no operating system expert, but with the data that we have today, I think that a PowerPC-optimised Linux such as Yellow Dog is a better idea for the Xserve than Mac OS X Server.

References

Threading on OS X
http://developer.apple.com/technotes/tn/tn2028.html

Basics OS X
http://developer.apple.com/documentation/macosx/index.html


112 Comments

  • edchi - Tuesday, June 26, 2007 - link


    I haven't tried this yet, but will do tomorrow. Here is what Apple suggests to create a better MySQL installation:

    http://docs.info.apple.com/article.html?artnum=303...
  • grantma - Tuesday, April 18, 2006 - link

    I found Gnome was a lot more snappy than OS X desktop under Debian PowerPC. You could tell the kernel was far faster using Linux 2.6 - programs would just start immediately.
  • pecosbill - Wednesday, June 15, 2005 - link

    I'm not going to waste my time searching to see if these same comments below were made already, but the summary of them is that those who are performance-oriented tune their code for a CPU. You can do the same for an OS. Also, the "Big Mac" cluster at Virginia Tech speaks otherwise about raw performance, as OS X was the OS of choice. From macintouch.com:

    Okay, stop, I have to make an argument about why this article fails, before I explode. MySQL has a disgusting tendency to fork() at random moments, which is bad for performance essentially everywhere but Linux. OS X server includes a version of MySQL that doesn't have this issue.
    No real arguments that Power Macs are somewhat behind the times on memory latency, but that's because they're still using PC3200 DDR1 memory from 2003. AMD/Intel chips use DDR2 or Rambus now ... this could be solved without switching CPUs.
    The article also goes out of its way to get bad results for PPC. Why are they using an old version of GCC (3.3.x has no autovectorization, much worse performance on non-x86 platforms), then a brand spanking new version of mySQL (see above)? The floating point benchmark was particularly absurd:

    "The results are quite interesting. First of all, the gcc compiler isn't very good in vectorizing. With vectorizing, we mean generating SIMD (SSE, Altivec) code. From the numbers, it seems like gcc was only capable of using Altivec in one test, the third one. In this test, the G5 really shows superiority compared to the Opteron and especially the Xeons"

    In fact, gcc 3.3 is unable to generate AltiVec code ANYWHERE, except on x86 where they added a special SSE mode because x87 floating point is so miserable. This could have been discovered with about 5 minutes of Google research. It wouldn't have had to be discovered at all if they hadn't gone out of their way to use a compiler which is not the default on OS X 10.4. Alarm bells should have been going off in the benchmarker's head when an AMD chip outperforms an Intel one by 3x, but, anyway ...
    I hate to seem like I'm just blindly defending Apple here, but this article seems to have been written with an agenda. There's no way one guy could stuff this much stuff up. To claim there's something inherently wrong with OS X's ability to be a server is going against so much publicly available information it's not even funny. Notice Apple seems to have no trouble getting Apache to run with Linux-like performance: [Xserve G5 Performance].
    Anyway ... on a more serious note, a switch of sorts to x86 may not be a hugely insane idea. IBM's ability to produce a low power G5 part seems to be seriously in question, so for PowerBooks Apple is pretty much running out of options. Worse comes to worst - if they started selling x86-powered portables, that might get IBM to work a bit harder to get them faster desktop chips.
    -- "A Macintosh MPEG software developer"
  • aladdin0tw - Tuesday, June 14, 2005 - link

    This is the first time I have seen someone use the 'ab' command to conduct a test and then try to tell us something from it.

    In my opinion, ab is not a 'stress test' tool by any measure, especially if you want to draw credible benchmark conclusions from it. If ab were acceptable, why would I have to write so much code for a stress test?

    Using 'localhost' is another problematic area: DNS. Why not use a fixed IP as the address? The first rule of benchmarking is to isolate the domain in question, but I cannot see that you obeyed this rule. So how can you interpret your result as a performance fault rather than a DNS-related problem?

    I think you should benchmark again and try some of the good practices used in the software industry.

    Aladdin from Taiwan
  • demuynckr - Sunday, June 12, 2005 - link

    jhagman, the number in the Apache test table is the number of requests per second that the server handles.
  • jhagman - Wednesday, June 08, 2005 - link

    Hi again, demuynckr.

    Could you please answer me, or preferably add the information to the article: what does the number in the Apache test table mean, and what kind of page was loaded?

    I assumed that the numbers given were hits per second or transfer rate. I've been testing a bit on my powerbook (although with a lower n) and I can very easily beat the numbers you have. So it is apparent that my assumption was wrong.

    BTW, gcc-3.3 on Tiger knows the switch -mcpu=G5
  • rubikcube - Wednesday, June 08, 2005 - link

    I thought I would post this set of benchmarks for OS X on x86 vs. PPC. Even though XBench is a questionable benchmark, it is still capable of shedding some light on these questions about Linux on PPC.

    http://www.macrumors.com/pages/2005/06/20050608063...
  • webflits - Wednesday, June 08, 2005 - link

    "Yes I have read the article, I also personally compiled the microbenchmarks on linux as well as on the PPC, and I can tell you I used gcc 3.3 on Mac for all compilation needs :)."

    I believe you :)

    But why are the results I get way higher than the numbers listed in the article?
  • mongo lloyd - Tuesday, June 07, 2005 - link

    At least the non-ECC RAM, that is.
  • mongo lloyd - Tuesday, June 07, 2005 - link

    Any reason for why you weren't using RAM with lower timings on the x86 processors? Shouldn't there at least have been a disclaimer?
