PCIe 3.0 vs. PCIe 2.0

As part of our testing on the X79 Extreme11, we decided to test both PCIe 2.0 and PCIe 3.0 scenarios. Because the onboard PLX chips provide a full x16/x16/x16/x16 allocation (minus any PLX latency), this should give a rough idea of how the two standards compare. Our testing ran each benchmark at 2560x1440 with full eye candy settings. Here are our results, expressed as the percentage difference of PCIe 3.0 over PCIe 2.0:

PCIe 3.0 vs. PCIe 2.0
2560x1440, Full AA/AF
ASRock X79 Extreme11, x16/x16/x16/x16

          Metro 2033   Dirt 3
1x 7970     -0.3%      +3.8%
2x 7970     +2.6%      +4.3%
3x 7970     +1.2%      +4.2%
4x 7970     +1.9%      +0.5%

As we can see, there is an improvement for both Dirt 3 and Metro 2033, though the difference is barely noticeable. The effect of PCIe 3.0 depends on the engine in question, whether DirectX or OpenGL based: each game engine uses the PCIe bus differently. In games where the PCIe bus is used extensively, PCIe 3.0 will win out; otherwise we are at the whim of statistical variation between runs.
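To make the comparison explicit, here is a minimal sketch in Python of how a percentage difference of PCIe 3.0 over PCIe 2.0 can be computed from two averaged frame rates. The frame rate values are hypothetical placeholders, not figures measured for this review.

    # Hypothetical example: percentage gain of PCIe 3.0 over PCIe 2.0.
    # The FPS values are placeholders, not measured results.

    def pcie_uplift(fps_pcie3: float, fps_pcie2: float) -> float:
        """Return the percentage gain of PCIe 3.0 over PCIe 2.0."""
        return (fps_pcie3 - fps_pcie2) / fps_pcie2 * 100.0

    # e.g. a hypothetical four-card result
    print(f"{pcie_uplift(122.4, 121.8):+.1f}%")  # prints +0.5%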

Dirt 3

Dirt 3 is a rally racing game, the third title in the Dirt offshoot of the Colin McRae Rally series, developed and published by Codemasters. Using the in-game benchmark, Dirt 3 is run at 2560x1440 with full graphical settings. Results are reported as the average frame rate across four runs.
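As a rough illustration of how that reported figure is produced, the sketch below averages four hypothetical benchmark runs and also reports the run-to-run spread; the numbers are placeholders, not measured data.

    from statistics import mean, stdev

    # Four hypothetical passes of the in-game benchmark (placeholder FPS values).
    runs = [141.2, 139.8, 140.5, 140.9]

    print(f"average: {mean(runs):.1f} fps")             # the average reported as the result
    print(f"run-to-run spread: {stdev(runs):.2f} fps")  # sample standard deviation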

Dirt 3 - One 7970

Dirt 3 - Two 7970

Dirt 3 - Three 7970

Dirt 3 - Four 7970

Due to the PLX chips, we would expect the X79 Extreme11 to fall behind slightly in single and dual GPU performance, and this is confirmed in the benchmark results. In four-way GPU configurations, however, the X79 board still falls behind some Z77 boards.

Dirt 3 - One 580

Dirt 3 - Two 580

Using NVIDIA GPUs, Dirt 3 remains largely agnostic to CPU and PCIe performance.

Metro 2033

Metro 2033 is a DX11 title that challenges every system trying to run it at high-end settings. Developed by 4A Games and released in March 2010, the game includes a built-in DirectX 11 Frontline benchmark, which we use to test the hardware at 2560x1440 with full graphical settings. Results are given as the average frame rate across four runs.

Metro 2033 - One 7970

Metro 2033 - Two 7970

Metro 2033 - Three 7970

Metro 2033 - Four 7970

Metro 2033 mirrors the findings from Dirt 3: the ASRock cannot keep pace with the other boards. This suggests that the dual PLX chips impose a bigger hit to frame rates than previously thought.

Metro 2033 - One 580

Metro 2033 - Two 580

Comments

  • Azethoth - Monday, September 3, 2012 - link

    "a SAS". "an" is for words starting with vowels like "an error", "a" is for words starting with consonants like "a Serial Attached SCSI" or "a Storage Area Network" or "a SAS"*. It rolls off the tongue better when you don't have adjacent vowels.

    *Your particular English implementation may have different rules; these were the ones I grew up with. I find them simple and easy to apply.
  • lukarak - Tuesday, September 4, 2012 - link

    That's not entirely true.

    It would be an 'a' if you read it as 'a sas'. But with SAS, we usually pronounce it as S A S, and then it goes with 'an'.
  • ahar - Tuesday, September 4, 2012 - link

    Who's "we"? It doesn't include me. Why use three syllables when one will do?
    Do you also talk about R A M, or R A I D arrays, or an L A N?
  • Death666Angel - Tuesday, September 4, 2012 - link

    Like lukarak said, that is not true. The English language uses "an" when the word following it starts with a vowel sound. That doesn't necessarily mean the word has a vowel as its first character (see "hour").

    As for abbreviations, there is no rule for it. Some people pronounce them like a single word, others don't. I use LAN, RAM, RAID as a word, but pronounce SAS as S.A.S. and SATA as S.ATA for example and SNES as S.NES. You can't appease both groups. So I think the writer of the article should go with whatever he feels most comfortable with, so that he avoids flipping between things unconsciously.
  • Death666Angel - Monday, September 3, 2012 - link

    "If you believe the leaks/news online about an upcoming single slot GTX670, or want to purchase several single slot FirePro cards, then the ASRock will give you all that bandwidth as long as the user handles the heat."
    I'd probably slap some water coolers on there. Insane setup :D.
  • tynopik - Monday, September 3, 2012 - link

    Is it even confirmed that this Ivy Bridge-E is coming out?
  • ypsylon - Tuesday, September 4, 2012 - link

    But little is delivered.

    1. Primitive RAID option. Without even a small cache it is about as useful as Intel Matrix Storage RAID. Of course, for RAID 1/10 parity calculations are not required, so the lack of an XOR chip isn't an issue, but believe me, even 128 MB of cache would improve performance greatly.
    2. They bolted 8 SATA/SAS ports to the board instead of using the standard server-oriented SFF-8087 connector. With SFF-8087 you get one cable running 4 drives, not a separate cable for each drive. Very clumsy solution. And very, very cheap. Exactly what I expect of ASR.
    3. If someone wants RAID, buy a proper hardware controller, even for simple R1/10 setups - plenty of choice on the market. When you change the board in the future you just unplug the controller from the old board and plug it into the new one. No configuration is needed; all arrays remain the same. The idea of running RAID off the motherboard is truly hilarious, especially if somebody changes boards every year or two.
    4. Fan on the south bridge (or the only bridge, as the north bridge is in the CPU now? ;) ). Have mercy!
    5. They pretend it is a WS-oriented board yet they equip it with lame Broadcom NICs. Completely clueless; that kind of inept reasoning is really typical of ASRock.
    6. And finally, why persist with ATX? At least E-ATX would be a better choice. Spacing out some elements wouldn't hurt, especially with 7 full PCIe slots. It is impossible to replace RAM when the top slot is occupied, and with really big VGAs it is a tight squeeze between CPU, RAM and VGA. Why not drop the top slot to allow air to circulate? Without proper cooling in the case there will be a pocket of hot air which will never move.

    To sum up: bloody expensive, dumb implementation of certain things, and cheaply made. Like 99% of ASRock products. A cheap Chinese fake dressed like a Rolls-Royce. In short: stay away.
  • dgingeri - Tuesday, September 4, 2012 - link

    1. Many server manufacturers equip their small business servers with a low end chip like that because of cost. Small businesses, like those who would build their own workstation class machines, have to deal with a limited budget. This works for this market space.

    2. I don't see any sign of a SFF-8087 port or cable. I see only SATA ports. Honestly, I would have preferred a SFF-8087 port/cable, as my Dell H200 in my Poweredge T110 II uses. It would take up less real estate on the board and be more manageable. I know this from experience.

    3. Yeah, the Dell H200 (or its replacement, the H310) has plenty of ports (8) and runs <$200, yet any hardware RAID controller with a cache would run $400 for 4 ports or about $600 for 8. (I have a 3ware 9750 in my main machine that ran me $600.) Depending on your target market, cost could matter. They get what they can with the budget they have.

    4. I'd have to agree with you on the fan, but there's also the little matter of keeping clearance for the video cards to populate the slots. Take off the decorative plate and make the heatsink bigger, and they could probably do without the fan. Unfortunately, there are lots of stupid people out there who buy things on looks rather than capability.

    5. Broadcom NICs are vastly superior to the Realtek or Atheros NICs we usually see on DIY boards. I would be happier to see Intel NICs, but Broadcom is still the second best on the market. I have 2 dual-port Broadcom NICs in my Dell T110 II machine (which I use as a VMware ESXi box to train up for certification and as my home server). They work quite well, as long as you don't use link aggregation.

    6. Many people wouldn't be able to afford a case that would handle E-ATX, especially the target market for this board.

    For the target market (a DIY, part-time IT guy at a small business trying to build a decent CAD or graphics workstation), it would work fairly well. I'm just not sure about the reliability factor, which would cost a small business big time. I'd say stay away just on that factor. Do with a little less speed and more reliability if you want to stay in business. Dell makes some nice IB workstations that would be perfectly reliable, but wouldn't be as speedy as a SB-E machine.
  • 08solsticegxp - Sunday, June 9, 2013 - link

    You have to realize, this board is not a server board. If it were designed for that, I'm sure they would have two sockets. Also, it is much cheaper to add the LSI chip to the board than to have it as an add-on card. If it were an add-on card... where do you expect it to go when using 4 video cards?
    I think the board is designed very well for what it was intended for. You may want to consider looking at the design as it relates to the intended purpose... not some other purpose.

    I will agree that I would have liked to see a RAID 5 option on the controller. However, looking at the price of LSI controllers (which are noted for being high-quality RAID controllers), they get pretty pricey once you reach the models that offer RAID 5.
