
  • coolhardware - Monday, June 24, 2013 - link

    I would kill for some solid data on how the entire Haswell lineup does at Starcraft 2 (SC2). It is one of the only games I play and I would love to know what FPS I could expect at a variety of resolutions and quality settings.

    Any chance Anandtech can run some benchmarks? Pretty please :-)
  • Sushisamurai - Monday, June 24, 2013 - link

    Notebook benchmarks with a comparable GPU setup rate the HD 5000 around 60 FPS average for HotS on medium graphics; 24 and 14 FPS for high and ultra-high respectively :(
  • althaz - Tuesday, June 25, 2013 - link

    The HD 4000 plays it acceptably with medium graphics (it's one of the primary reasons I bought a Surface Pro).
  • Krysto - Thursday, July 11, 2013 - link

    Who plays Starcraft on a 10" screen?
  • lolipopman - Wednesday, August 20, 2014 - link

    Who plays Starcraft?
  • mikk - Monday, June 24, 2013 - link

    No RAM info. If this Intel HD graphics comparison mixes dual-channel and single-channel results, it is misleading.
  • coder543 - Monday, June 24, 2013 - link

    It's probably all MacBook Airs, so that shouldn't be an issue.
  • mikk - Tuesday, June 25, 2013 - link

    More notebooks are sold with single-channel configurations than you might think. Unfortunately, some reviewers don't run tools like CPU-Z to check whether a system runs in dual-channel or single-channel mode. Furthermore, Anand's 3DMark 2013 Fire Strike result is 15% slower than another HD 5000 MacBook from Notebookcheck. With such tests it is also important to run in high-performance mode; there's no info here on whether he did.
  • A5 - Monday, June 24, 2013 - link

    No one is going to ship a single channel memory configuration. At least no one worth buying from.
  • justsome1 - Tuesday, June 25, 2013 - link

    El-cheapo Dell ships single-channel "ultrabooks", and so do even some Asus models.
  • Benk78 - Wednesday, June 26, 2013 - link

    The Lenovo Yoga 13 is single-channel too.
  • ikjadoon - Thursday, July 11, 2013 - link

    I tried searching but found nothing recent; what is the memory bandwidth difference between dual and single channel, and what is the difference in real-world usage?

    It can't be stunningly slower, as the Yoga 13 uses single-channel memory and I've never seen it docked points for "slow performance."
  • TheinsanegamerN - Saturday, July 13, 2013 - link

    Graphics are very bandwidth-heavy. The bandwidth difference: single-channel DDR3-1600 is 12.8 gigabytes per second, while dual-channel is 25.6 gigabytes per second. When the graphics are using system memory, every gigabyte counts. Moving from single to dual channel with Ivy Bridge pushed framerates over 60% higher on the same laptop, so the difference with the more powerful Haswell chip will be even more noticeable.
    And the Yoga 13 is not put through graphically intensive work, hence nobody reviews it as slow.
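    The arithmetic above can be sketched quickly (assuming DDR3-1600, i.e. 1600 MT/s on a 64-bit bus per channel):

```python
# Peak theoretical DDR3 bandwidth: transfers/s * bus width in bytes * channels.
# Assumes DDR3-1600 (1600 million transfers/s) and a 64-bit (8-byte) bus
# per channel, as in the comment above.
def ddr3_bandwidth_gbps(mt_per_s=1600, channels=1, bus_bytes=8):
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

print(ddr3_bandwidth_gbps(channels=1))  # 12.8 GB/s single channel
print(ddr3_bandwidth_gbps(channels=2))  # 25.6 GB/s dual channel
```

    These are peak theoretical figures; real sustained bandwidth is lower, and an iGPU shares it with the CPU.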
  • TheinsanegamerN - Saturday, July 13, 2013 - link

    For reference, the relatively old and low-end AMD HD 6650M has 25.6 gigabytes per second of bandwidth all to itself, and is considered bandwidth-starved. It is also similar to the Intel HD 4600 in terms of game performance. And Intel HD graphics have to share bandwidth with the system, so single channel is dog slow with anything intensive.
  • FITCamaro - Thursday, August 28, 2014 - link

    What are you talking about? I see tons of hardware reviewed on here that includes only a single DIMM, which means single channel.
  • monstercameron - Monday, June 24, 2013 - link

    So that is GT3... where is the thunder?
  • sherlockwing - Monday, June 24, 2013 - link

    The 15W TDP is limiting its power; my guess is you can't see the full power of GT3 without the 28W TDP that Iris has.
  • ImSpartacus - Monday, June 24, 2013 - link

    ASUS is cramming one of those into a relatively slim 13.3" laptop. I'm excited to see how it does.

    The rMBP13 '13 will probably have something similar, but its chassis can handle 35W CPUs, so I'm hesitant to think that Apple will give up 7W (plus whatever the PCH puts out). Does anyone else smell a custom 35W GT3 chip?
  • A5 - Monday, June 24, 2013 - link

    I'd wager that they use the thermal headroom to make sure it spends more time at the higher modes. Nothing wrong with running a bit cooler, either.
  • dylan522p - Monday, June 24, 2013 - link

    They could just let it boost up more/longer.
  • TheinsanegamerN - Saturday, July 13, 2013 - link

    This. Notice that less CPU-intensive games see a much higher performance boost, since the CPU cores don't have to boost as high, while more demanding games don't speed up much at all. The high temperatures also impacted the boost clocks, something else the MacBook Pro should remedy.
  • A5 - Monday, June 24, 2013 - link

    Check the battery life tests in the main article.
  • Roland00Address - Monday, June 24, 2013 - link

    So in other words there is very little reason for OEMs to pay the $50 extra for the HD 5000 instead of the HD 4400, unless they use cTDP-up? cTDP would allow the maximum TDP to go up, giving the graphics more headroom to hit the higher turbos.

    Then again, if the OEMs were okay with the better cooling needed for higher-cTDP chips, why wouldn't they just go for the chip with a base TDP of 28 watts instead of 15? You can even get Iris 5100 in the 28W chips, and thus use the Iris marketing.

    Is anybody going to use the HD 5000 besides Apple?
  • ImSpartacus - Monday, June 24, 2013 - link

    Do the 28W chips cTDP up? And to what TDP?
  • IntelUser2000 - Tuesday, June 25, 2013 - link

    There's no cTDP-up on the 28W and 15W GT3 chips. The 15W GT2 does have it, however.
  • Ikefu - Monday, June 24, 2013 - link

    Very cool, definitely good for comparisons to Ivy Bridge. I'd really like to see those same game benchmarks for Haswell GT2 graphics as well. I'd like a Haswell convertible but I want to see how much an upgrade from HD4400 to HD5000 nets me. Thanks!
  • mavere - Monday, June 24, 2013 - link

    The HD 5000's underwhelming performance boost really is interesting, because that higher price tag seems to be doing very little, and Apple isn't the type to cut into its profit margins just for the hell of it.

    Anand, do you know if compute/openCL benchmarks perform any differently?
  • iwod - Monday, June 24, 2013 - link

    Again, the main problem with Intel graphics is drivers. Intel tends to stop development of previous-generation graphics drivers once it has a new graphics architecture out, which is scheduled to appear with the Broadwell SoC. This wouldn't be much of a problem for Apple, since they develop their own graphics drivers. Sometimes I wonder if this means Apple gets a much heftier discount from Intel, since Intel spends fewer resources on the Mac platform.

    Given the performance of the Intel HD 5000, I understand why Apple doesn't want to make a Retina Air.
  • ilkhan - Tuesday, June 25, 2013 - link

    What is the program used to grab the power usage graph? Would love to be able to collect that info myself.
  • mikk - Tuesday, June 25, 2013 - link

    That is Intel's Extreme Tuning Utility.

    Btw, I also miss some GPU frequency recordings with GPU-Z here. It would be interesting to know how far the turbo goes up in games.
  • IntelUser2000 - Tuesday, June 25, 2013 - link

    It won't work with GPU-Z; you need to use HWiNFO. GPU-Z is pretty crap for Intel iGPUs.
  • MrSpadge - Tuesday, June 25, 2013 - link

    In the article you often refer to these chips having "less thermal headroom". I'd rather say they are power-constrained: attach a better cooler (not easy in an Ultrabook, but possible) and these chips won't perform any better, because they're already using their full 15 W under load. If the chips were thermally limited you would hear the fan screaming... which you didn't, as I understand from the article.

    BTW: it would be really nice to see measured power draw while running these benchmarks as well; that would make Haswell look even better compared to Ivy. Average clock speeds could also reveal more... maybe the HD 5000 has to clock so low in that 15 W config that the voltage already hits the absolute minimum, and further scaling down couldn't improve efficiency over the HD 4400. For this one would need to read out the voltages, or at least know the frequency-voltage curves of these GPUs. Would be nice if you could do either of those :)
  • Shadowmaster625 - Tuesday, June 25, 2013 - link

    Wow, what a waste it is to use the HD 5000. It is only fractionally better than the HD 4400. All those transistors... wasted.
  • tipoo - Tuesday, June 25, 2013 - link

    It's primarily for the lower power required. With more EUs, they can run at lower clock speeds and voltages to perform as well with less power used. In the 28W versions (presumably headed for the 13" MBP) we'll see how GT3 can really perform when power is less of a consideration.
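    The wide-and-slow trade-off can be sketched with back-of-the-envelope numbers (illustrative only; the frequency and voltage scalings below are assumptions, not Intel's published figures). Dynamic power scales roughly as C·V²·f, so doubling the EUs while halving the clock keeps throughput about constant, and the lower voltage that the lower clock permits is where the savings come from:

```python
# Relative dynamic power: P ∝ C * V^2 * f.
# eu_scale stands in for switched capacitance C (more EUs = more silicon),
# f_scale is relative clock, v_scale is relative voltage.
def rel_power(eu_scale, f_scale, v_scale):
    return eu_scale * v_scale**2 * f_scale

baseline = rel_power(1.0, 1.0, 1.0)   # narrow-and-fast (GT2-style) config
wide_slow = rel_power(2.0, 0.5, 0.8)  # 2x EUs, half clock, assumed -20% voltage

# Throughput (EUs * f) is identical in both cases, yet:
print(wide_slow / baseline)  # 0.64 -> same work at ~64% of the power
```

    The whole benefit hinges on the V² term; if the voltage were already at its floor (as MrSpadge speculates above), adding EUs and lowering the clock would buy little.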
  • Penti - Tuesday, June 25, 2013 - link

    The MacBook Pros, at least the 15-inch, will use quad-core processors that aren't single-chip; the 13.3-inch version also uses 35W chips today. They could bump that one to the 47W GT3e part if they wanted performance, as that would put it roughly, slightly under the old 15.6-inch Pros in graphics performance. You just have to wait and see what the refreshes and new models bring when it comes to Haswell, Apple or not Apple for that matter. Lots of PCs simply use the dual-core GT2 part, for example. The single-chip ULT parts don't have any external PCIe links for a GPU. I don't think HD 5000 and Iris 5100 graphics really matter that much either; the 28W parts are mostly about CPU performance. It all depends on where they want to take it.
  • IntelUser2000 - Tuesday, June 25, 2013 - link

    The 28W Iris Graphics 5100 is 30% faster in Bioshock 2 and Tomb Raider.
  • IntelUser2000 - Tuesday, June 25, 2013 - link

    Compared to the HD 5000 I mean. :P
  • Penti - Wednesday, June 26, 2013 - link

    Sounds reasonable when you factor in the much faster CPU and the slightly faster (in clocks) GPU.
  • icrf - Tuesday, June 25, 2013 - link

    Honest question, not trying to troll: is there a purpose to better graphics outside of gaming or professional applications? Have we already reached a baseline of UI acceleration for common office / browsing / content-consumption tasks? Basically, if I'm not running Crysis or Photoshop, should I care? Will I notice anything?
  • hova - Tuesday, June 25, 2013 - link

    You will notice it when playing high-res videos on the web and also when scrolling through heavy websites (if the browser makes good use of the GPU).
    By far the biggest purpose is for higher-resolution "retina-like" screens. And all this is just for the "regular" office/web user. Like you said, gamers and professionals will also like having more graphics performance in a more portable form factor. It's a great win/win for everyone.
  • Namisecond - Monday, August 19, 2013 - link

    I think we have reached that baseline of UI acceleration. Intel's baseline integrated graphics is now the HD. It suffices for everything aside from gaming. I'm using it in a Celeron 847 Windows box connected to my 1080p TV. CPU usage can be a bit high when streaming HD content from services like Netflix, but I very rarely see a skipped frame, and with an SSD, performance is snappy.
  • name99 - Tuesday, June 25, 2013 - link

    "increasing processor graphics performance in thermally limited conditions is very tough, particularly without a process shrink. The fact that Intel even spent as many transistors as it did just to improve GPU performance tells us a lot about Intel's thinking these days. "

    As always on the internet, the game fanatics completely miss the point when they think this is all about them. Intel doesn't give a damn about game players (except to the extent that it can sell them insanely overpriced K-series devices which they will then destroy in overclocking experiments --- a great business model, but with a very small pool of suckers who are buying).

    What Intel cares about is following Apple's lead. (Not just because it sells a lot of chips to Apple but because Apple has established over the past few years that it has a better overall view of where computing is going than anyone else, or to put it differently, where it goes everyone else will follow a year or two later.)

    So what does Apple want? It's been pretty obvious, since at least the first iPhone, how Apple sees the future --- it was obvious in the way the iPhone compositing system uses the "layer" (i.e. a piece of backing store representing a view, some *fragment* of a window) as its basic element, rather than the window. The whole point of layers is that they allow us to move the graphics heavy lifting to the GPU not JUST for compositing (i.e. CPU creates each window, which the GPU then composites together) but for all drawing.

    We've seen this grow over the years. Apple has moved more and more of OSX (subject to the usual backward compatibility constraints and slowdowns) to the same layering model, for example they've given us a new scrolling model for 10.9 which allows for smooth ("as butter????") scrolling which is not constrained by the CPU.

    So step 1 was move as much of the basic graphics (blitting, compositing, scaling) to the GPU.

    But there is a step 2, which became obvious a year or so later, namely moving as much computation as makes sense to the GPU, namely OpenCL. Apple has pushed OpenCL more and more over the years, and they're not just talking the talk. Again part of what happens in 10.9 is that large parts of CIFilter (Apple's generic image manipulation toolbox, very cleverly engineered) moves to run on the GPU rather than the CPU. Likewise Apple is building more "game physics" into both the OS (with UI elements that behave more like real world matter) and as optimized routines available for Game Developers (and presumably ingenious developers who are not developing games, but who can see a way in which things like collision detection could improve their UIs). I assume most of these physics engines are running on the GPU.

    Point is --- the Haswell GPU may well not be twice as large in order to run GRAPHICS better; it's twice as large in order to be a substantially better OpenCL target. Along the same lines, it's possible that the A7 will come with a GPU which does not seem like a massive improvement over its predecessor and the competition insofar as traditional graphics tasks go, but is again a substantially better OpenCL target.

    (I also suspect that, if they haven't done so already, it will contain a HW cell dedicated to conversion to or from sRGB and/or ICC color spaces. Apple seems to be pushing really hard for people to perform their image manipulation in one of these two spaces, so that linear operations have linear effects. This is the kind of subtle thing that Apple tends to do, which won't have an obviously dramatic effect, not enough to be a headline feature of future HW or SW, but will, in the background, make all manipulated imagery on Apple devices just look that much better going forward. And once they have a dedicated HW cell doing the work, they can do this everywhere, without much concern for the time or energy costs.)
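    For reference, the sRGB-to-linear conversion being discussed follows the standard IEC 61966-2-1 transfer function; a minimal sketch (the hardware-cell speculation above is the commenter's, and this only illustrates the math such a cell would implement):

```python
def srgb_to_linear(s):
    # Standard sRGB electro-optical transfer function, s in [0, 1].
    if s <= 0.04045:
        return s / 12.92
    return ((s + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):
    # Inverse transform, back to gamma-encoded sRGB.
    if l <= 0.0031308:
        return l * 12.92
    return 1.055 * l ** (1 / 2.4) - 0.055

# Why it matters: blending gamma-encoded values directly darkens the
# result; averaging in linear space gives the perceptually correct mid.
mid = linear_to_srgb((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2)
print(mid)  # ~0.735 in sRGB encoding, not 0.5
```

    Doing this per pixel for every image operation is cheap but ubiquitous work, which is exactly the kind of thing a small fixed-function cell could hide.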
  • tipoo - Tuesday, June 25, 2013 - link

    I'm not sure I follow. GT3 ULV is twice as big so that it can be clocked lower; the two cancel parts of each other out. The OpenCL performance, then, won't increase any more than game performance, as far as I can see.
  • name99 - Tuesday, June 25, 2013 - link

    A better OpenCL target doesn't just mean faster. It means better handling of conditionals for example. And there are plenty of things in OpenCL 1.2 which could be done inefficiently with older HW, but much more efficiently with appropriately targeted hardware.

    This is my point. Anand et al. are assuming all that extra HW exists purely to speed up OpenGL, and that it basically does a lousy job of it. I'm suggesting that most of the HW is there to speed up OpenCL, and it won't appear in benchmarks which don't test OpenCL appropriately.
  • tipoo - Wednesday, June 26, 2013 - link

    I see. I suspected years ago that that was the route Apple was taking, but so far the GPU acceleration is nothing exotic; perhaps they've been laying a multi-year foundation for some big upgrade, we never know. I think they broke the chain of OpenCL-compatible GPUs once though, going from the 320M to the HD 3000, if I'm not mistaken. It would be a bummer if the newer HD 3000 MacBooks couldn't get the upgrade while the older 320M ones were fine.
  • tipoo - Wednesday, June 26, 2013 - link

    By the way, regarding the A7 comment: if it really does use the SGX 600 series/Rogue, that would also be a huge gaming boost. It supposedly hits 200 GFLOPS, into the PS3/360 range.
  • ananduser - Tuesday, June 25, 2013 - link

    Thank you for enlightening us on Apple's strategy and why we should all follow it as it is perfect no matter what.
  • name99 - Tuesday, June 25, 2013 - link

    Really? That's what I was doing? I thought I was explaining why a doubling of the GPU's area didn't appear to result in a commensurate improvement in OpenGL performance, along with some business background as to WHY Intel is ramping up OpenCL rather than OpenGL performance.

    If you have an alternative explanation for the performance characteristics we're seeing, I think we'd all like to hear it.
  • ekotan - Friday, June 28, 2013 - link

    Well, I believe name99 makes some great points. OpenCL is gaining traction not just because it accelerates massively-parallel exotic scientific algorithms which the general user would never use, but also because Apple is leveraging it to accelerate everyday operations of the OS which the general user would use constantly.

    Sales data shows that people are opting to purchase more and more mobile devices, and they want better battery life and decent performance. A discrete GPU, although powerful, cannot deliver the "better battery life" part of that equation, so Intel has a big win if they can improve their IGP to the point where it can deliver, say, 80% of the performance of a mid-range discrete GPU at 20% of the power cost. That makes sense to me.

    Gamers will still only be satisfied with their desktop machines and discrete GPUs, no change there, but that is not the target Intel is intending to go after with their IGP efforts.
  • knicholas - Friday, July 12, 2013 - link

    So if I do not game and mainly use a laptop for editing RAW pics and watching 1080p videos, is the HD 5000 overkill? My other use is web browsing ADD (I usually have about 10 tabs open: 5 articles and 5 loading YouTube). I'm asking because I'm considering the Vaio Duo 13 but am not sure whether the HD 5000 is better for my needs or not.
  • brruno - Saturday, July 13, 2013 - link

    The HD 5100 also has 128MB of eDRAM; that's why it's called Iris.
  • brruno - Saturday, July 13, 2013 - link

    Forget it; after a more thorough search I found it doesn't... so why call it Iris?
    Is the only difference from the HD 5100 the clock speed?
  • Kadora - Thursday, September 05, 2013 - link

    Would love to see more info on the HD 4600, as most desktop CPUs have this graphics model. Any difference from the HD 5000?
  • i3227u - Monday, January 26, 2015 - link

    What app is that?
  • douglord - Thursday, February 12, 2015 - link

    Wow - incredibly helpful. I was actually thinking about changing my 4400 (S7) for a 5200 (GS30). Seems like that wouldn't do much.

    I think we are taking this thin-and-light stuff too far. I don't want a 10 lb gaming laptop, but I also don't need a 2 lb, 11", half-inch-thick laptop that almost fits in my pants pocket. Why can't we get a high-end, 15", 5 lb brick with a 100Wh battery and some decent thermals? Put a 5557U in it for "good enough" CPU performance and the ability to play a few games on its Iris 6100. It could probably get 10+ hours off a charge.
