Image Quality

Software compatibility and image quality remain understandable concerns; however, Intel has improved tremendously in both areas over the past couple of years. I couldn't run Total War: Shogun 2 on Iris Pro, but every other game I threw at the system ran without errors - a significant improvement over where things stood not too long ago. On the compute side, I couldn't get our Folding@Home benchmark to work, but everything else ran well.

On the image quality front I didn't see much to be concerned about. I noticed occasional texture flashing in Battlefield 3, but it was never something I could capture in a screenshot quickly enough. Intel seems quick to address any issues that crop up, and as a company it has considerably increased staffing and resources on the driver validation front.

The gallery below has a series of images taken from some of the benchmarks in our suite. I didn't notice any obvious differences between Intel and NVIDIA render quality. By virtue of experience and focus, I expect software compatibility, image quality and driver/hardware efficiency to remain better on the NVIDIA side of the fence. At the same time, I have no reason to believe Intel isn't serious about continuing to address these areas going forward. Intel as a company has gone from begging software developers to at least let their code run on Intel integrated graphics to actively working with game developers to introduce new features and rendering techniques.

177 Comments

  • kyuu - Saturday, June 1, 2013 - link

    It's probably a habit carried over from evading censorship.
  • maba - Saturday, June 1, 2013 - link

    To be fair, there is only one data point (GFXBenchmark 2.7 T-Rex HD - 4X MSAA) where the 47W cTDP configuration is more than 40% slower than the tested GT 650M (rMBP15 90W).
    Actually we have the following [min, max, avg, median] for 47W (55W):
    games: 61%, 106%, 78%, 75% (62%, 112%, 82%, 76%)
    synth.: 55%, 122%, 95%, 94% (59%, 131%, 102%, 100%)
    compute: 85%, 514%, 205%, 153% (86%, 522%, 210%, 159%)
    overall: 55%, 514%, 101%, 85% (59%, 522%, 106%, 92%)
    So typically around 75% for games with a considerably lower TDP - not that bad.
    I do not know whether Intel claimed equal or better performance given a specific TDP or not. With the given 47W (55W) compared to a 650M it would indeed be a false claim.
    But my point is that with at least ~60% performance, and typically ~75%, it is admittedly much closer than you stated.
  • whyso - Saturday, June 1, 2013 - link

    Note that your average 650M is clocked lower than the 650M reviewed here.
  • lmcd - Saturday, June 1, 2013 - link

    If I recall correctly, the rMBP 650m was clocked as high as or slightly higher than the 660m (which was really confusing at the time).
  • JarredWalton - Sunday, June 2, 2013 - link

    Correct. GT 650M by default is usually 835MHz + Boost, with 4GHz RAM. The GTX 660M is 875MHz + Boost with 4GHz RAM. So the rMBP15 is a best-case for GT 650M. However, it's not usually a ton faster than the regular GT 650M -- benchmarks for the UX51VZ are available here:
    http://www.anandtech.com/bench/Product/814
  • tipoo - Sunday, June 2, 2013 - link

    I think any extra power just went to the rMBP scaling operations.
  • DickGumshoe - Sunday, June 2, 2013 - link

    Do you know if the scaling algorithms are handled by the CPU or the GPU on the rMBP?

    The big thing I am wondering is that if Apple releases a higher-end model with the MQ CPU's, would the HD 4600 be enough to eliminate the UI lag currently present on the rMBP's HD 4000?

    If it's done on the GPU, then having the HQ CPUs might actually get *better* UI performance than the MQ CPUs for the rMBP.
  • lmcd - Sunday, June 2, 2013 - link

    No, because these benchmarks would change the default resolution, which as I understand is something the panel would compensate for?

    Wait, aren't these typically done while the laptop screen is off and an external display is used?
  • whyso - Sunday, June 2, 2013 - link

    You got this wrong. The 650M is 735/1000 + Boost to 850/1000; the 660M is 835/1250 + Boost to 950/1250.
  • jasonelmore - Sunday, June 2, 2013 - link

    The worst mistake Intel made was that demo with DiRT running side by side with a 650M laptop. That set people's expectations, and when the part falls short in the reviews, people dog it. If they had just kept quiet, people would be praising them up and down right now.
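
The summary statistics maba quotes above (min, max, average, and median of Iris Pro performance relative to the GT 650M) can be reproduced with a short script like this. This is a sketch only; the scores below are illustrative placeholders, not the review's actual benchmark data:

```python
from statistics import mean, median

# Hypothetical benchmark scores (higher is better) for the two GPUs.
# These values are made up for illustration, NOT taken from the review.
iris_pro = {"game_a": 45.0, "game_b": 53.0, "synth_a": 1100.0}
gt_650m  = {"game_a": 60.0, "game_b": 50.0, "synth_a": 1000.0}

# Relative performance of Iris Pro, as a percentage of the GT 650M score.
ratios = [100.0 * iris_pro[k] / gt_650m[k] for k in iris_pro]

print(f"min={min(ratios):.0f}%  max={max(ratios):.0f}%  "
      f"avg={mean(ratios):.0f}%  median={median(ratios):.0f}%")
```

Grouping the benchmarks into games, synthetics, and compute before computing the statistics, as maba did, just means building one such `ratios` list per category.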
