DivX 6.8.5 with Xmpeg 5.0.3

Our DivX test is the same DivX / Xmpeg 5.0.3 test we've run for the past few years now: the 1080p source file is encoded using the unconstrained DivX profile, quality/performance is set to balanced at 5, and enhanced multithreading is enabled:

DivX 6.8.5 w/ Xmpeg 5.0.3 - MPEG-2 to DivX Transcode

And we're done. DivX encoding, historically a stronghold for AMD's Phenom II processors (at least compared to their price-competitive Penryn counterparts), is faster on the Core i5 750 than on the Phenom II X4 965 BE. What's wrong with that?

The i5 750 costs $199; the 965 BE costs $245. For once, Intel is selling you more transistors for less than AMD is.

x264 HD Video Encoding Performance

Graysky's x264 HD test uses the publicly available x264 codec (an open source H.264 encoder) to transcode a 720p MPEG-2 source at 4Mbps. The focus here is on quality rather than speed, so the benchmark uses a 2-pass encode and reports the average frame rate in each pass.
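For reference, a 2-pass encode with the standalone x264 CLI looks roughly like the sketch below. The file names and bitrate value are placeholder assumptions, not Graysky's actual script, and since x264 natively reads raw/y4m input, an MPEG-2 source would normally be piped through a decoder such as ffmpeg first. The script no-ops gracefully if x264 isn't installed.

```shell
# Hypothetical sketch of a 2-pass x264 encode (input name and bitrate
# are placeholders; not the exact settings used by the benchmark).
INPUT=source_720p.y4m
BITRATE=4000   # kbps, matching the ~4Mbps target

if command -v x264 >/dev/null 2>&1; then
  # Pass 1: analysis pass, writes stats.log; video output is discarded
  x264 --pass 1 --bitrate "$BITRATE" --stats stats.log -o /dev/null "$INPUT"
  # Pass 2: final encode, re-using the rate statistics from pass 1
  x264 --pass 2 --bitrate "$BITRATE" --stats stats.log -o output.mkv "$INPUT"
else
  echo "x264 not installed; skipping encode"
fi
```

The first pass is dominated by analysis and is typically much faster than the second, which is why the benchmark reports the two pass rates separately.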

x264 HD Encode Benchmark - 720p MPEG-2 to x264 Transcode

In the first pass AMD is quite competitive, outpacing the i5 750, but when we get to the actual encode:

x264 HD Encode Benchmark - 720p MPEG-2 to x264 Transcode

It's close, but the cheaper i5 750 is faster than the Phenom II X4 965 BE once again; Hyper-Threading keeps the i7 920 ahead.


Windows Media Encoder 9 x64 Advanced Profile

In order to be codec agnostic, we've included a Windows Media Encoder benchmark that looks at the same sort of workload as the DivX and x264 tests, but using WME instead.

Windows Media Encoder 9 x64 - Advanced Profile Transcode

AMD is about 6% faster than the i5 750 here; it looks like the Phenom II does have some hope left. Let's see how the rest unfolds...


343 Comments

View All Comments

  • strikeback03 - Tuesday, September 8, 2009 - link

How would you have graphics then? You would be limited to the x4 PCIe off the P55 on motherboards which support it, as there is no integrated graphics (yet).
  • MX5RX7 - Tuesday, September 8, 2009 - link

    I'm not sure that CPU/GPU integration is a good thing, from a consumer standpoint. At least in the short term.

    For example, in the article you mention how the majority of modern games are GPU, not CPU limited. The current model allows us to purchase a very capable processor and pair it with a very capable GPU. Then, when the ultra competitive GPU market has provided us with a choice of parts that easily eclipse the performance of the previous generation, we either swap graphics cards for the newer model, or purchase a second now cheaper identical card and (hopefully) double our game performance with SLI or Crossfire. All without having to upgrade the rest of the platform.

    With the current model, a new graphics API requires a new graphics card. With Larrabee, it might very well require a whole new platform.

  • Ben90 - Tuesday, September 8, 2009 - link

Yeah, I'm really excited for Larrabee, who knows if it will be good or not... but with Intel kicking ass in everything else, it will at least be interesting

    With overclocking performance seemingly being limited by the PCI-E controller, it seems like an unlocked 1156 would be pretty sweet

All in all I gotta admit I was kinda bitter with this whole 1156 thing because I jumped on the 1366 bandwagon and it seemed that Intel was mostly just jacking off with the new socket... but this processor seems to bring a lot more innovation than I expected (just not in raw performance, still great performance though)
  • chizow - Tuesday, September 8, 2009 - link

Was worried no one was going to properly address one of the main differences between P55 and X58, thanks for giving it a dedicated comparison. Although I would've liked to have seen more games tested, it clearly indicates PCIe bandwidth becoming an issue with current generation GPUs. This will only get worse with the impending launch of RV8x0 and GT300.
  • Anand Lal Shimpi - Tuesday, September 8, 2009 - link

PCIe bandwidth on Lynnfield is only an issue with two GPUs; with one you get the same 16 lanes as you would on X58 or AMD 790FX.

If I had more time I would've done more games; I just wanted to focus on those that I knew scaled the best to see what the worst case scenario would be for Lynnfield.

    In the end 2 GPUs are passable (although not always ideal on Lynnfield), but 4 GPUs are out of the question.

    Take care,
    Anand
  • JumpingJack - Thursday, September 10, 2009 - link

Anand, a few other sites have attempted SLI/Xfire work ... one in particular shows 4 GPUs having no impact at all on gaming performance in general -- well, 3 or 4 FPS, but nothing more than a few percentage points over norm.

    Could your configuration with beta or just bad first release drivers be an issue?

    Jack
  • JonnyDough - Tuesday, September 8, 2009 - link

    Would it be possible to incorporate two GPU controllers onto a die instead of one or is that what they'll be doing with future procs? I would think that two controllers with a communication hub might supply the needed bandwidth of x16 + x16.
  • Comdrpopnfresh - Tuesday, September 8, 2009 - link

With two GPUs being passable, do you foresee that applying both to two independent GPUs as well as to single dual-GPU cards?
  • Ryan Smith - Tuesday, September 8, 2009 - link

    Yes. The only difference between the two is where the PCIe bridge chip is. In the former it's on the mobo, in the latter it's on the card itself.
  • Eeqmcsq - Tuesday, September 8, 2009 - link

    Talk about bringing a bazooka to a knife fight. AMD better be throwing all their innovation ideas and the kitchen sink into Bulldozer, because Intel is thoroughly out-innovating AMD right now.
