Compiling Benchmarks

We get a lot of requests to show some compiling benchmarks. Those playing Gentoo at home should pay particular attention to this portion of the benchmark. We took the standard Linux 2.6.4 release from kernel.org and compiled it on our 32-bit test bed. For simplicity, we did not cross-compile, so we are only looking at the 32-bit vanilla kernel. We used the commands below.
# yes "" | make config
# time make

32-bit Linux 2.6.4 Kernel Compile

We are greeted with a nice, gradual performance curve. The slower 3.4GHz P4EE has now overtaken its faster 90nm counterpart on several occasions - a definite trend has set in. Below, you can see how the Intel processors fared when we enabled HyperThreading.

HyperThreading: 32-bit Linux 2.6.4 Kernel Compile

Keep in mind that make itself is not threaded; we are just determining what kind of performance hit occurs by enabling the two virtual processors rather than keeping just one active. Obviously, running two applications at once will see performance benefits. The unfortunate fact remains that workstation software continues to be largely linear. We receive some benefit by running multiple applications at the same time, like rendering a file while playing an MP3, but very few Linux workstation programs fully utilize multiple threads. Fortunately, we have an article in the works that deals with just how to get the best performance out of multiple threads (and HyperThreading/SMP configurations).

We ran two additional tests with the Prescott processors, measuring the time to make the kernel while forcing make to run parallel jobs.

HyperThreading: 32-bit Linux 2.6.4 Kernel Compile


HyperThreading: 32-bit Linux 2.6.4 Kernel Compile


It is very easy to use make -j* incorrectly. With HyperThreading, there are only small percentage gains from using make -j3 over make -j2.
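For readers trying this at home, the job count can be derived from the processor count rather than hard-coded. Here is a hedged sketch following the common "processors + 1" rule of thumb; it assumes `nproc` from GNU coreutils is available on the build host:

```shell
#!/bin/sh
# Sketch: derive a make job count from the number of logical
# processors (the common "n+1" rule of thumb). nproc counts
# logical CPUs, so a single HyperThreaded P4 reports 2 and
# this would suggest building with -j3.
CPUS=$(nproc)
JOBS=$((CPUS + 1))
echo "detected ${CPUS} logical CPUs; suggesting make -j${JOBS}"
# time make -j"${JOBS}"   # run from the top of a configured kernel tree
```

On a single non-HyperThreaded processor this yields make -j2; whether the extra job actually helps depends on how much of the build is waiting on disk I/O rather than the CPU.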
Comments

  • Aquila76 - Monday, September 20, 2004 - link

    It's happened again: the cheapest AMD64 939 processor is the better choice of the processors reviewed here for price/performance. Granted the Linux market isn't a huge chunk of the market for either company, but among those who do use Linux there is no reason to use anything but AMD.
  • fitten - Monday, September 20, 2004 - link

    It would be interesting to see these tests run on several S754 boxes as well. I know this one was entitled "cutting edge performance" but with the cost difference between the S939 solutions and the S754 solutions, many will opt (as I have) to go with the S754 parts. I run SuSE 9.1 Professional AMD64 on an Athlon 64 3000+ and have been pleased with it.
  • Aquila76 - Monday, September 20, 2004 - link

    This may be a little off-topic, but shows yet another reason California hates MS. This is what happens when you move to Windows...

    http://software.silicon.com/applications/0,3902465...

    Love the caption: Someone forgot to reboot
  • fitten - Monday, September 20, 2004 - link

    "Our processor test bed is completely caseless, and if we have issues with our 3.6GHz processor out of a normal case, we can't imagine what issues might exist in a full enclosure."

    Today's processors ("today" meaning back to the days of the original Pentium) typically run hotter with the case open than with it closed. With the case closed, cooler air is forced through the case and across the right places to help lower component temperatures, assuming that the fans are placed appropriately, are venting in the right direction, and cables/components are arranged to allow the air to flow properly.

    Newer form factors, such as Intel's BTX, are designed in large part to improve cooling.

    You should be running your test bed in a case with appropriate cooling.
  • Araemo - Monday, September 20, 2004 - link

    A note about the compilation benchmark:

    Make and gcc themselves are not multi-threaded, but make does understand a -j option to specify the number of concurrent make jobs to run. The general rule of thumb I have heard is to specify -j equal to n+1, where n is the number of processors in the system (one compile job per processor, plus one control job). So, to test if hyperthreading lowers the compile time, specify `time make -j 3`.

    I would personally like to see this, in addition to the single-threaded results. I read some quick-and-dirty benchmark results suggesting hyperthreading does help the compile time, but those results were published by curious people when hyperthreading was new. I haven't seen any results on recent processors.
  • garfield - Monday, September 20, 2004 - link

    Isn't it normal procedure for a P4 that's getting too hot to throttle its clock speed down? Maybe that would explain the extremely bad results, though it seems a bit unrealistic that the results have been reproducible, unless each iteration of the benchmark was run for a long time.
  • ceefka - Monday, September 20, 2004 - link

    First of all: good article. It really shows the early benefits of well-written 64-bit software. The 3500+ is definitely on my wishlist ;-)

    "This does not bode well for the processor. Our processor test bed is completely caseless, and if we have issues with our 3.6GHz processor out of a normal case, we can't imagine what issues might exist in a full enclosure."

    This bit is rather confusing, Kristopher. Anyway, I read that a good case will offer convection, which apparently a caseless test bed does not. How were the temperatures of the other CPUs? Were they also a tad above average or typical peak?
  • balzi - Monday, September 20, 2004 - link

    Some thoughts --

    Can we please have graphs where the order of the legend is the same as the order of the bars in the bar graph? Surely that's possible.

    also, the strange "Thermal issue" error. It seems that you thought it was weird but immediately assumed that it was correct; that the Intel CPU(s) were getting too hot.

    Did you verify this somehow?? It seems strange to call it "An unusual problem" and then trust that it's correct without question or explanation.

    Thanks
  • Shinei - Monday, September 20, 2004 - link

    "This does not bode well for the processor. Our processor test bed is completely caseless, and if we have issues with our 3.6GHz processor out of a normal case, we can't imagine what issues might exist in a full enclosure."
    That quote made me laugh, and I'm not entirely sure why. :D

    Anyway, I see that going to 64-bit is definitely worth the price of admission, considering the huge gains the processors get in the jump. One thing I had a question on, though: Why does the result from the SSL benchmark halve between 32-bit and 64-bit? Is it that the keys are longer in 64-bit?
  • gherald - Monday, September 20, 2004 - link

    I too would prefer longer images that include both 32 and 64-bit results. Mouseover comparisons are cumbersome.

    Is sample.wav 800mb or 700mb? I'm guessing the 7 was probably just a typo.

    Nice analysis of DDR1 vs 2.

    My only gripe is I wish a full complement of "lower end" processors were included in all these benchmarks (754s, slower prescotts, and heck even northwoods)... but I guess that'd be too much work.
