No more mysteries: Apple's G5 versus x86, Mac OS X versus Linux
by Johan De Gelas on June 3, 2005 7:48 AM EST
Mac OS X versus Linux

Lmbench 2.04 provides a suite of micro-benchmarks that measure bottlenecks at the Unix operating system and CPU level. This makes it well suited for testing the theory that Mac OS X might be the culprit behind the terrible server performance of the Apple platform.
Signals allow processes (and thus threads) to interrupt other processes. In a database system such as MySQL 4.x, where many processes/threads (60 in our MySQL screenshot) and many kernel accesses must be managed, signal handling is a critical performance factor.
Larry McVoy (SGI) and Carl Staelin (HP):
"Lmbench measures both signal installation and signal dispatching in two separate loops, within the context of one process. It measures signal handling by installing a signal handler and then repeatedly sending itself the signal."
Host            OS          MHz    null call  null I/O  stat   open/close  select TCP  sig inst  sig hndl
Xeon 3.06 GHz   Linux 2.4   3056   0.42       0.63      4.47   5.58        18.2        0.68      2.33
G5 2.7 GHz      Darwin 8.1  2700   1.13       1.91      4.64   8.60        21.9        1.67      6.20
Xeon 3.6 GHz    Linux 2.6   3585   0.19       0.25      2.30   2.88        9.00        0.28      2.70
Opteron 850     Linux 2.6   2404   0.08       0.17      2.11   2.69        12.4        0.17      1.14
All numbers are expressed in microseconds, so lower is better. First of all, you can see that kernel 2.6 is in most cases a lot more efficient. Secondly, although this is not the most accurate benchmark, the message is clear: Darwin, the foundation of Mac OS X Server, handles signals the slowest. In some cases, Darwin is even several times slower.
As we increase the level of concurrency in our database test, many threads must be created. Unix process/thread creation is called "forking", as a copy of the calling process is made.
lmbench "fork" measures simple process creation by creating a process and immediately exiting the child process. The parent process waits for the child process to exit. The benchmark is intended to measure the overhead of creating a new thread of control, so it includes both the fork and the exit time.
lmbench "exec" measures the time to create a completely new process, while "sh" measures the time to start a new process and run a small program via /bin/sh (complicated new process creation).
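The "fork" and "sh" cases above can be sketched as follows. This is an illustrative approximation of lmbench's lat_proc benchmarks, not their actual code; the loop counts, the use of /bin/true, and the timing approach are assumptions made for the example.

```python
import os
import time

def time_forks(n=200):
    """'fork proc': create a child that exits immediately;
    the parent waits for it, so fork + exit are both included.
    Returns the rough average cost per fork in microseconds."""
    start = time.perf_counter()
    for _ in range(n):
        pid = os.fork()
        if pid == 0:
            os._exit(0)        # child: exit at once, as lat_proc does
        os.waitpid(pid, 0)     # parent: wait for the child to exit
    return (time.perf_counter() - start) / n * 1e6

def time_shell(n=50):
    """'sh proc': start a new process and run a trivial command via
    /bin/sh (os.system invokes the shell), the most expensive case."""
    start = time.perf_counter()
    for _ in range(n):
        os.system("/bin/true")
    return (time.perf_counter() - start) / n * 1e6

if __name__ == "__main__":
    print(f"fork: ~{time_forks():.0f} us, sh: ~{time_shell():.0f} us")
```

The "sh" case is the slowest because it nests all three operations: fork a child, exec the shell, and have the shell fork/exec the command.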
Host            OS      MHz    fork proc  exec proc  sh proc
Xeon 3.06 GHz   Linux   3056   163        544        3021
G5 2.7 GHz      Darwin  2700   659        2308       4960
Xeon 3.6 GHz    Linux   3585   158        467        2688
Mac OS X is incredibly slow, between 2 and 5(!) times slower, at creating new threads, as it doesn't use kernel threads and has to go through extra layers (wrappers). No need to continue our search: the G5 might not be the fastest integer CPU on earth, but its database performance is completely crippled by an asthmatic operating system that needs up to 5 times more time to handle and create threads.
Rosyna - Friday, June 3, 2005
Actually, for better or worse, the GCC that Apple includes is being used for most Mac OS X software. OS X itself was compiled with it.
elvisizer - Friday, June 3, 2005
rosyna's right.
i'm just not sure if there IS any way to do the kind of comparison you seem to've been shooting for (pure competition between the chips with as little else affecting the outcome as possible). you could use the 'special' compilers on each platform, but those aren't used for compiling most of the binaries you buy at compusa.
elvisizer - Friday, June 3, 2005
why didn't you run some tests with YD linux on the g5?!?! you could've answered the questions you posed yourself!
and you definitely should've included after effects. "we don't have access to that software" what the heck is THAT about?? you can get your hands on a dual 3.6 xeon machine, a dual 2.5 G5, and a dual 2.7 G5, and you can't buy a freaking piece of adobe software at retail?!
some seriously weird decisions being made here.
other than that, the article was ok. re-confirmed suspicions i've had for awhile about OS X server handling large numbers of threads. My OS X servers ALWAYS tank hard with lots of open sessions, so i keep them around only for emergencies. They are so very easy to admin, tho, they're still attractive to me for small workgroup sizes. like last month, I had to support 8 people working on a daily magazine being published at E3, literally inside the convention center. os x server was perfect in that situation.
Rosyna - Friday, June 3, 2005
There appears to be either a typo or a horrible flaw in the test. It says you used GCC 3.3.3, but OS X comes with gcc version 3.3 20030304 (Apple Computer, Inc. build 1809).
If you did use GCC 3.3.3 then you were giving the PPC a severe disadvantage as the stock GCC has almost no optimizations for PPC while it has many for x86.
Eug - Friday, June 3, 2005
"But do you really think that Oracle would migrate to this if it wasn't on a par?"
[Eug dons computer geek wannabe hat]
There are lots of reasons to migrate, and I'm sure absolute performance isn't always the primary concern. We won't know the real performance until we actually see tests on Oracle/Sybase.
My uneducated guess is that they won't be anywhere near as bad as the artificial server benches might suggest, but OTOH, I could easily see Linux on G5 significantly besting OS X on G5 for this type of stuff.
i.e., the most interesting test I'd like to see is Oracle on the G5, with both OS X and Linux, compared to Xeon and Opteron with Linux.
And yeah, it would be interesting to see what gcc 4 brings to the table, since 3.3 provides no autovectorization at all. It would also be interesting to see how xlc/xlf does, although that doesn't provide autovectorization either. Where are the autovectorizing IBM compilers that were supposed to come out???
melgross - Friday, June 3, 2005
As none of us has actual experience with this, none of us can say yes or no.
But do you really think that Oracle would migrate to this if it wasn't on a par? After all, Ellison isn't on Apple's board anymore, so there's nothing to prove there.
I also remember that, going back to Apple's G4 Xserves, their performance was better than that of the x86 crowd, and the Sun servers as well. Those tests were on several sites. Been a while though.
JohanAnandtech - Friday, June 3, 2005
querymc: Yes, you are right. The --noaltivec flag and the comment that altivec was enabled by default in the gcc 3.3.3 compiler docs made me believe there is autovectorization (or at least "scalarisation"). As I wrote in the article, we used -O2 and then tried a bucket load of other options like --fast-math, --mtune=G5, and others I don't remember anymore, but it didn't make any big difference.
querymc - Friday, June 3, 2005
The SSE support would probably also be improved by using GCC 4 with autovectorization, I should note. There's a reason it does poorly in GCC 3. :)
querymc - Friday, June 3, 2005
Johan: I didn't see this the first time through, but you need to make a slight clarification to the floating point stuff. There is no autovectorization capability in GCC 3.3. None. There is limited support for SSE, but that is not quite the same, as SSE isn't SIMD to the extent that AltiVec is. If you want to use the AltiVec unit in otherwise unaltered benchmarks, you don't have a choice other than GCC 4 (and you need to pass a special flag to turn it on).
Also, what compiler flags did you pass on each platform? For example, did you use --fast-math?
JohanAnandtech - Friday, June 3, 2005
Melgross: Apple told me that most Xserves in Europe are sold as "do it all" machines: a little web server (Apache), a Sybase database, Samba, and so on. They didn't have any client with heavy traffic on the web server, so nobody complains.
Sybase/Oracle seem to have done quite a bit of work to get good performance out of Mac OS X, so it would be interesting to see how they managed to solve those problems. But I am sceptical that Oracle/Sybase runs faster on Mac OS X than on Linux.