Mac OS X versus Linux

Lmbench 2.04 provides a suite of microbenchmarks that measure bottlenecks at the Unix operating system and CPU level. This makes it very suitable for testing the theory that Mac OS X might be the culprit behind the terrible server performance of the Apple platform.

Signals allow processes (and thus threads) to interrupt other processes. In a database system such as MySQL 4.x, where many processes/threads (60 in our MySQL screenshot) and many accesses to the kernel must be managed, signal handling is a critical performance factor.

Larry McVoy (SGI) and Carl Staelin (HP):
" Lmbench measure both signal installation and signal dispatching in two separate loops, within the context of one process. It measures signal handling by installing a signal handler and then repeatedly sending itself the signal."
Host            OS          MHz    null  null  stat  open  slct  sig   sig
                                   call  I/O         clos  TCP   inst  hndl
Xeon 3.06 GHz   Linux 2.4   3056   0.42  0.63  4.47  5.58  18.2  0.68  2.33
G5 2.7 GHz      Darwin 8.1  2700   1.13  1.91  4.64  8.60  21.9  1.67  6.20
Xeon 3.6 GHz    Linux 2.6   3585   0.19  0.25  2.30  2.88  9.00  0.28  2.70
Opteron 850     Linux 2.6   2404   0.08  0.17  2.11  2.69  12.4  0.17  1.14

All numbers are expressed in microseconds, so lower is better. First of all, you can see that kernel 2.6 is in most cases a lot more efficient. Secondly, although this is not the most accurate benchmark, the message is clear: Darwin, the foundation of Mac OS X Server, handles signals the slowest. In some cases, Darwin is even several times slower.

As we increase the level of concurrency in our database test, many threads must be created. Unix process/thread creation is called "forking", as a copy of the calling process is made.

lmbench "fork" measures simple process creation by creating a process and immediately exiting the child process. The parent process waits for the child process to exit. The benchmark is intended to measure the overhead of creating a new thread of control, so it includes both the fork and the exit time.

lmbench "exec" measures the time to create a completely new process, while "sh" measures the time to start a new process and run a little program via /bin/sh (complicated new process creation).
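
Concretely, the "fork proc" measurement boils down to a loop like the following. Again, this is our simplified sketch of the idea, not lmbench's own code; the iteration count and timing helper are ours:

/* Sketch of the "fork proc" idea: create a child, let it exit at once,
   and have the parent wait for it. Not lmbench's actual source. */
#include <stdio.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define ITERATIONS 1000

static double now_usec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1e6 + tv.tv_usec;
}

int main(void)
{
    double t0, t1;
    pid_t pid;
    int i;

    t0 = now_usec();
    for (i = 0; i < ITERATIONS; i++) {
        pid = fork();
        if (pid == 0)
            _exit(0);              /* child: exit immediately */
        waitpid(pid, NULL, 0);     /* parent: wait for the child */
    }
    t1 = now_usec();
    printf("fork proc: %.1f usec\n", (t1 - t0) / ITERATIONS);
    return 0;
}
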

Host            OS       MHz    fork  exec  sh
                                proc  proc  proc
Xeon 3.06 GHz   Linux    3056    163   544  3021
G5 2.7 GHz      Darwin   2700    659  2308  4960
Xeon 3.6 GHz    Linux    3585    158   467  2688
Opteron 850     Linux    2404    125   471  2393

Mac OS X is incredibly slow, between 2 and 5(!) times slower, at creating new threads, as it doesn't use kernel threads and has to go through extra layers (wrappers). No need to continue our search: the G5 might not be the fastest integer CPU on earth, but its database performance is completely crippled by an asthmatic operating system that needs up to 5 times more time to handle and create threads.
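
As an aside, the lmbench numbers above time process creation; a thread-oriented variant of the same microbenchmark would swap fork() for pthread_create(), roughly as in this sketch of ours (not part of lmbench; compile with -pthread):

/* Our sketch of a thread-creation microbenchmark, analogous to the
   "fork proc" loop above; not part of lmbench. */
#include <pthread.h>
#include <stdio.h>
#include <sys/time.h>

#define ITERATIONS 1000

static void *worker(void *arg)
{
    (void)arg;
    return NULL;                   /* thread exits immediately */
}

static double now_usec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1e6 + tv.tv_usec;
}

int main(void)
{
    pthread_t tid;
    double t0, t1;
    int i;

    t0 = now_usec();
    for (i = 0; i < ITERATIONS; i++) {
        pthread_create(&tid, NULL, worker, NULL);
        pthread_join(tid, NULL);   /* wait, as the parent waits after fork */
    }
    t1 = now_usec();
    printf("thread create+join: %.1f usec\n", (t1 - t0) / ITERATIONS);
    return 0;
}
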

Mac OS X: beautiful but… Workstation, yes; Server, no.
Comments

  • Joepublic2 - Monday, June 6, 2005 - link

    Wow, pixelglow, that's an awesome way to advertise your product. No marketing BS, just numbers!
  • pixelglow - Sunday, June 5, 2005 - link

    I've done a direct comparison of G5 vs. Pentium 4 here. The benchmark is cache-bound, minimal branching, maximal floating point and designed to minimize use of the underlying operating system. It is also single-threaded so there's no significant advantage to dual procs. More importantly it uses Altivec on G5 and SSE/SSE2 on the Pentium 4, and also compares against different compilers including the autovectorizing Intel ICC.

    http://www.pixelglow.com/stories/macstl-intel-auto...
    http://www.pixelglow.com/stories/pentium-vs-g5/

    Let the results speak for themselves.
  • webflits - Sunday, June 5, 2005 - link

    "From the numbers, it seems like gcc was only capable of using Altivec in one test,"

    Nonsense!
    The Altivec SIMD only supports single (32-bit) precision floating point and the benchmark uses double precision floating point.


  • webflits - Sunday, June 5, 2005 - link

    Here are the results on a dual 2.0 GHz G5 running 10.4.1, using the stock Apple gcc 4.0 compiler.



    [Session started at 2005-06-05 22:47:52 +0200.]

    FLOPS C Program (Double Precision), V2.0 18 Dec 1992

    Module      Error       RunTime     MFLOPS
                             (usec)
       1      4.0146e-13     0.0163     859.4752
       2     -1.4166e-13     0.0156     450.0935
       3      4.7184e-14     0.0075    2264.2656
       4     -1.2546e-13     0.0130    1152.8620
       5     -1.3800e-13     0.0276    1051.5730
       6      3.2374e-13     0.0180    1609.4871
       7     -8.4583e-11     0.0296     405.4409
       8      3.4855e-13     0.0200    1498.4641

    Iterations = 512000000
    NullTime (usec) = 0.0015
    MFLOPS(1) = 609.8307
    MFLOPS(2) = 756.9962
    MFLOPS(3) = 1105.8774
    MFLOPS(4) = 1554.0224
  • frfr - Sunday, June 5, 2005 - link

    If you test a database, you have to disable the write cache on the disk on almost any OS, unless you don't care about your data. I've read that OS X is an exception because it allows the database software control over it, and that MySQL indeed does use this. This would invalidate all your MySQL results except for OS X.

    Besides, all serious databases run on controllers with battery-backed write caches (and with the write cache on the disks disabled).

  • nicksay - Sunday, June 5, 2005 - link

    It is pretty clear that there are a lot of people who want Linux PPC benchmarks. I agree. I also think that if this is to be a "where should I position the G5/Mac OS X combination compared to x86/Linux/Windows" article, you should at least use the default OS X compiler. I got flops.c from http://home.iae.nl/users/mhx/flops.c to do my own test. I have a stock 10.4.1 install on a single 1.6 GHz G5.

    In the terminal, I ran:
    gcc -DUNIX -fast flops.c -o flops

    My results:

    FLOPS C Program (Double Precision), V2.0 18 Dec 1992

    Module      Error       RunTime     MFLOPS
                             (usec)
       1      4.0146e-13     0.0228     614.4905
       2     -1.4166e-13     0.0124     565.3013
       3      4.7184e-14     0.0087    1952.5703
       4     -1.2546e-13     0.0135    1109.5877
       5     -1.3800e-13     0.0383     757.4925
       6      3.2374e-13     0.0220    1320.3769
       7     -8.4583e-11     0.0393     305.1391
       8      3.4855e-13     0.0238    1258.5012

    Iterations = 512000000
    NullTime (usec) = 0.0002
    MFLOPS(1) = 736.3316
    MFLOPS(2) = 578.9129
    MFLOPS(3) = 866.8806
    MFLOPS(4) = 1337.7177


    A quick add-n-divide across the eight module results gives my system an average of 985.43243.

    985. On a single 1.6 G5.

    So, the oldest, slowest PowerMac G5 ever made almost matches a top-of-the-line dual 2.7 G5 system?

    To quote, "Something is rotten in the state of Denmark." Or should I say the state of the benchmark?
  • Eug - Saturday, June 4, 2005 - link

    BTW, about the link I posted above:

    http://lists.apple.com/archives/darwin-dev/2005/Fe...

    The guy who wrote that is the creator of the BeOS file system (and who now works for Apple).

    It will be interesting to see if this is truly part of the cause of the performance issues.

    Also, there is this related thread from a few weeks back on Slashdot:

    http://hardware.slashdot.org/article.pl?sid=05/05/...
  • profchaos - Saturday, June 4, 2005 - link

    The statement about Linux kernel modules is incorrect. It is a popular misconception that kernel modules make the Linux kernel something other than purely monolithic. The module loader links module code in kernelspace, not in userspace, the advantage being dynamic control of the kernel memory footprint. Although some previously kernelspace subsystems, such as devfs, have recently been rewritten as userspace daemons, such as udev, the Linux kernel is for the most part a fully monolithic design.

    The theories that fueled the monolithic vs. microkernel flame wars of the mid-90s were nullified by the rapid ramping of single-thread performance relative to memory subsystems. From the perspective of the CPU, it takes years for a context switch to occur, since modifying kernel data structures in main memory is so slow relative to anything else. Userspace context switching is based on IPC in microkernel designs, and may require several context switches in practice.

    As you can see from the results, Linux 2.6 wipes the floor with Darwin, just the same as it does with several of the BSDs (especially OpenBSD and FreeBSD 4.x) and its older cousin Linux 2.4. It's also anyone's guess whether the Linux 2.6 systems were using pthreads (from NPTL) or linuxthreads in glibc. It takes a heavyweight UNIX server system, which today means IBM AIX on POWER, HP-UX on Itanium, or to a lesser degree Solaris on SPARC, to best Linux 2.6 under most server workloads.
  • Eug - Saturday, June 4, 2005 - link

    Responses/Musings from an Apple developer.

    http://ridiculousfish.com/blog/?p=17
    http://lists.apple.com/archives/darwin-dev/2005/Fe...

    Also:

    They claim that making a new thread is called "forking". No, it’s not. Calling fork() is forking, and fork() makes processes, not threads.

    They claim that Mac OS X is slower at making threads by benchmarking fork() and exec(). I don’t follow this train of thought at all. Making a new process is substantially different from making a new thread, less so on Linux, but very much so on OS X. And, as you can see from their screenshot, there is one mySQL process with 60 threads; neither fork() nor exec() is being called here.

    They claim that OS X does not use kernel threads to implement user threads. But of course it does - see for yourself.
    /* Create the Mach thread for this thread */
    PTHREAD_MACH_CALL(thread_create(mach_task_self(), &kernel_thread), kern_res);

    They claim that OS X has to go through "extra layers" and "several threading wrappers" to create a thread. But anyone can see in that source file that a pthread maps pretty directly to a Mach thread, so I’m clueless as to what "extra layers" they’re talking about.

    They guess a lot about the important performance factors, but they never actually profile mySQL. Why not?
  • orenb - Saturday, June 4, 2005 - link

    Thank you for a very interesting article. A follow up on desktop and workstation performance will be very appreciated... :-)

    Good job!
