Initial Thoughts on 3DMark “2013”

First, let me say that while I understand the reasoning behind eliminating the year/version from the name, I’m going to generally refer to this release as 3DMark 2013, as there will inevitably be another 3DMark in a year or two. With that out of the way, how does this latest release stand up to previous iterations, and is it a useful addition to the benchmark repertoire?

No benchmark is ever perfect, and even “real world gaming benchmarks” can only tell part of the story. As long as we keep that thought at the forefront when looking at the latest 3DMark, the results are completely reasonable. Because the overall scores factor in both the Graphics and Physics tests, 3DMark will always reward a fast CPU and GPU working together over a fast GPU paired with a mediocre CPU, but I can’t say that approach is wrong: no matter what some companies might claim, there are always potential uses for more CPU power in games (physics and AI immediately come to mind), even if not every game needs a ton of CPU performance.
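
Since the overall score folds the Graphics and Physics results together, a weak CPU will drag the result down even with a strong GPU. By all appearances, the overall is a weighted harmonic mean of the subscores; here’s a minimal sketch for Fire Strike, with the specific weights (0.75 Graphics, 0.15 Physics, 0.10 Combined) treated as an assumption rather than gospel:

```python
def fire_strike_overall(graphics, physics, combined,
                        weights=(0.75, 0.15, 0.10)):
    """Weighted harmonic mean of the three Fire Strike subscores.

    The specific weights here are an assumption; Futuremark's
    technical guide is the authoritative source.
    """
    subscores = (graphics, physics, combined)
    return sum(weights) / sum(w / s for w, s in zip(weights, subscores))

# A strong GPU can't fully offset weak Physics/Combined results:
# the harmonic mean is pulled toward the smallest subscores.
print(round(fire_strike_overall(2353, 9565, 896)))  # 2242
```

Plugging in the FirePro M6000 subscores posted in the comments (2353 / 9565 / 896) lands within a point of its reported 2241 overall, which suggests this weighting is at least close.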

In terms of advancing the state of the benchmarking industry, it’s good to see the demo modes (cool graphics with sound are more enticing to the average person than a pure graphics benchmark). I also like the addition of graphs that show performance, power, temperatures, etc., though I wish they worked on all of the hardware rather than only some of the platforms. There’s at least the potential to now use 3DMark on its own to do stress testing without running additional utilities (HWiNFO or similar) in the background.

What I want to see now is how the various tablet and smartphone offerings stack up against the laptops I’ve tested. Some have mused that ARM and the latest SoCs are going to kill off the low-end laptop market, but we’re still a ways from that happening, at least from a performance perspective. As slow as HD 3000 can be compared to discrete GPUs, it’s probably still faster than any of the currently shipping SoC GPUs, and HD 4000 is another 50-100% faster than HD 3000. Both also use far more power, but when an iPad 4 includes a battery that holds as much energy as those in many budget laptops, we’re not exactly talking about an insurmountable gulf.

What I really wish is that more than one of the three tests could run on SoCs. Fire Strike is obviously too much for even notebook GPUs right now, but Cloud Gate ought to be able to run on the better SoCs. Ice Storm, on the other hand, runs at frame rates over 1000 FPS on a high-end desktop GPU, so if that’s the only point of comparison with the SoCs we’re missing quite a bit of detail. Regardless, it will be nice to have another cross-platform benchmark for gauging relative performance, and that looks to be exactly what 3DMark provides.


  • mindstorm - Wednesday, February 6, 2013 - link

    Result for my Dell M6700 with a FirePro M6000 with 2GB RAM, tested with Catalyst 13.2 beta 4.

    http://www.3dmark.com/3dm/63286?

    Ice Storm
    Overall: 92056
    Graphics Score: 128702
    Physics Score: 46107

    Cloud Gate
    Overall: 12160
    Graphics Score: 15605
    Physics Score: 6860

    Fire Strike
    Overall: 2241
    Graphics Score: 2353
    Physics Score: 9565
    Combined Score: 896

    Better performance on the Ice Storm test than the other notebooks tested; Cloud Gate comes in 2nd (of the notebooks) and Fire Strike 3rd.
    The lower placings all seem to be due to a low graphics score. I suppose that's because the M6000 is optimized for business apps, although I wonder how much of an issue drivers could be, since none of the versions I've tried so far have been very stable. 13.1 was a bit more stable than 13.2, and overclocking is out of the question with 13.2 beta 4.
  • Notmyusualid - Wednesday, February 6, 2013 - link

    M18x R2.

    i7 3920XM, 16GB CAS10 RAM.

    Left number is your M17x R4's results, middle is my M18x R2 with ONE 7970M card, right is both 7970Ms in Crossfire (reduced CPU multipliers):

    Ice Storm:
    Overall: 85072 / 146723 / 157182
    Graphics: 116561 / 273526 / 343640
    Physics: 43727 / 55947 / 54218

    Cloud Gate:
    Overall: 16729 / 18157 / 24279
    Graphics: 30624 / 32362 / 65783
    Physics: 6491 / 7159 / 7568

    Fire Strike:
    Overall: 4332 / 4467 / 7647
    Graphics: 4696 / 4807 / 9542
    Physics: 8910 / 10467 / 10449
    Also: Combined score: 2699

    No GPU overclocking.

    Operating System is Win 8 (Bloatware) x64; I can't reverse the install, and I'm too lazy to restore a backup.
    Graphics drivers are AMD Catalyst 13.2 Beta 5 (13.1 didn't install at all, and 13.2 completed only once). I don't think there's a CAP file for this one yet.

    Crossfire is working, but Intel's XTU informs me I was 'throttling', which is (somewhat) new to me. I game all day with 4 cores at x44 multipliers, but this benchmark seems to push the system harder.
    XTU reports 3% throttling over the benchmark period @ 4 cores x44.
    I dropped to x44,43,42,41 on beta 5 to allow it to complete the benchmark.

    Oddly, Super Pi completes two 32M runs back to back without error at x44,44,44,44 (though it does so a whole 30s slower than on Win 7). Ambient room temp is ~29C (it's 35C outside).

    Hope that is of interest to someone out there.
  • Spunjji - Thursday, February 7, 2013 - link

    Definitely of interest. It's odd how the Graphics scores increase dramatically with Crossfire, as you'd expect, but somehow the Overall scores don't. Seems they've either calibrated the engine or the score generator such that it leans towards overall system performance more than the GPU. Time will tell whether that gives us a reasonable picture of the system's performance, but past experience suggests it won't.
  • Notmyusualid - Friday, February 8, 2013 - link

    Yes, good point!

    The title of the product implies it is for testing 3D performance, yet it is shocking to see significant increases in GPU performance not reflected in the overall scores.

    I took another look at some other scores online, and it appears to me that the 'Physics Score' is really just a CPU test.

    And I thought 3DMark06 was being deprecated for focusing on CPU performance too much (as well as for not being DX11 capable), rather than simply reflecting overall 3D performance, as we'd expect from such a title.

    But then, I guess, some games are more CPU-dependent than others, so maybe it would be a mistake to leave out this sort of test for the average user looking to benchmark his system's overall gaming performance? I can't say for sure.
  • Krysto - Saturday, February 9, 2013 - link

    I'm really skeptical about the new 3DMark outputting scores that have 1:1 parity between DirectX machines and OpenGL ES 2.0 machines. If it doesn't, then it would be pretty useless for Anandtech's benchmarks, because you're trying to compare GPUs and hardware, not graphics APIs.

    So if, say, Tegra 3 on the Nexus 7 gives a 1000 score and the Surface RT gives a 1500 score because the benchmark awards higher scores to certain DirectX features, then the benchmark is useless, because it was supposed to show the GPUs are equal, and it won't.

    That's just speculation for now, but Anand and the guys should really watch out for this.
  • Krysto - Saturday, February 9, 2013 - link

    To be clear, if the drivers are better on one device than the other, then the benchmark SHOULD reflect that, and I'm sure it will. Also it should reflect a higher score if the graphics look better on DirectX machines or anything like that (although that will probably come with a hit in battery life, but that's another issue).

    What I'm saying is that if everything else is EQUAL, DirectX shouldn't get a higher score just because it's a more complex API than OpenGL ES 2.0. That wouldn't be on its merits.

    Also, I'm very disappointed that they are making such a big launch out of this and aren't even going to support OpenGL ES 3.0 out of the gate, even though it will probably be almost a year after OpenGL ES 3.0 launched before they even release their OpenGL ES 2.0 benchmark.

    Clearly they didn't want to prioritize the OpenGL ES side much, even 2.0, let alone 3.0. We might not see 3.0 support until mid-2014 at the earliest from them. Hopefully GLBenchmark 3.0 will come out this year.
  • shuhan - Wednesday, February 13, 2013 - link

    Anyone knows what might be the reason for this:

    http://www.3dmark.com/3dm/201917

    ?

    Thanks
  • shuhan - Wednesday, February 13, 2013 - link

    Just saw that:
    "Why is my Ice Storm score so high with my M17x R4 with a 3720QM and 7970M?

    http://www.3dmark.com/3dm/18860

    129064 overall
    244590 graphics
    48646 physics"

    My result is exactly the same! My rig: Intel Core i5-3570K @4.3, 2x GTX 670

    And then my Cloud Gate test scores lower.
  • failquail - Sunday, February 17, 2013 - link

    Certainly seems pretty :)

    I have CPU and GPU monitoring gadgets running on my second screen, and I noticed that whilst the first two tests seemed fairly balanced for CPU/GPU, the third test, Fire Strike, maxed out GPU usage the entire time with the CPU barely hitting 30% usage. A test for the SLI/Crossfire crowd, I think :)

    Not sure if it's just a display bug, but it didn't detect my GPU driver at all (ATI 6950 modded with the 6970 shader count, but GPU/RAM clocks still a little lower than a default 6970), and it detected my FX-8120 CPU (set to 3.4GHz, or 4.2GHz turbo) as running at 1.4GHz.

    Still first go was this:
    http://www.3dmark.com/3dm/241866
    79420/14031/3423

    I need to rerun it though, as I had lots of background stuff running and I think I still had AA forced in the GPU driver!
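
One pattern in the results above deserves a note: Crossfire roughly doubles the Graphics subscore, yet the overall score rises far less. That's what a weighted harmonic mean of the subscores produces when the Physics (CPU) score stays flat, since the harmonic mean leans toward the weakest component. A minimal sketch, with purely hypothetical weights (0.75 Graphics / 0.25 Physics):

```python
def overall(graphics, physics, w_gfx=0.75, w_phys=0.25):
    # Weighted harmonic mean: the overall leans toward the weaker
    # subscore, so a flat Physics result caps the possible gain.
    return (w_gfx + w_phys) / (w_gfx / graphics + w_phys / physics)

single = overall(5000, 10000)   # one GPU
dual = overall(10000, 10000)    # Graphics doubled, Physics unchanged
print(round(dual / single, 2))  # 1.75 -- a 2x graphics gain lifts the overall only 75%
```

That ~1.75x scaling is close to what Notmyusualid saw in Fire Strike (4467 to 7647 overall, about 1.71x, from roughly doubled graphics), so the damping looks like a property of the scoring formula rather than a Crossfire problem.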
