
  • silverblue - Friday, January 08, 2010 - link

    Hmm.
  • silverblue - Friday, January 08, 2010 - link

    Nope, it's just me (damned similar names)... however, the Clarkdale article has vanished from the front page. :|
  • dnenciu - Thursday, January 07, 2010 - link

    I don't know why reviewers are so happy about Arrandale.

    You basically get a 20% improvement over a Core 2 Duo at the same clock speed.

    What about the fact that Arrandale only goes up to 2.53GHz (2.66GHz for the extreme edition)?

    The C2D already goes to 3.33GHz.

    Yes, Turbo Boost raises the 2.53GHz to 3.06GHz, but only when a single core is in use and the laptop has proper cooling.

    The C2D at 3.06GHz can run both cores at that speed.

    So what we are seeing is last year's performance and the same battery life.

    And also last year's integrated GPU - one you now don't even have the choice to replace with a 9400M.

    I really feel underwhelmed by this chip release.

    Let's hope they can improve it in the next release, because for me this one is a big flop. :(
  • iwodo - Monday, January 04, 2010 - link

    I recall the Core iX series was shown to be much more efficient than C2D in a previous Anand article. Now we are actually getting different results. So Westmere gets more performance by using more energy and transistors.

    And this isn't a fair comparison either: the C2D platform uses a 65nm IGP and a 45nm CPU, while Westmere gets a full process node improvement in both.

    So in terms of pure power/performance, it looks like C2D still has an edge. I would love for Intel to make a 32nm C2D. (Which would play well with ION2, and Apple would love it.)

    I hope Sandy Bridge comes soon as a true successor to C2D. Nehalem, to me, is just a CPU made for servers.

    Side note: the Intel GPU, although fast enough at the lowest settings, still gives worse image quality than other IGPs, which gains it an unfair advantage. I hope some Internet review will point this out.
  • JarredWalton - Monday, January 04, 2010 - link

    On the desktop, Core i7 (and particularly Lynnfield) provided great idle power results. My testing of Core i7 notebooks, on the other hand, shows that the quad-core variant is a power guzzler.
  • zicoz - Monday, January 04, 2010 - link

    How does this compare to Clarkdale on the HTPC front? Does it support LPCM and bitstreaming? I have this dream of building an HTPC from laptop parts, and if this supports the same stuff as Clarkdale then this could be it.
  • jasperjones - Monday, January 04, 2010 - link

    Anand writes:

    "The first mainstream Arrandale CPUs are 35W TDP, compared to the 25W TDP of most thin and light notebooks based on mobile Core 2. Granted the 35W includes the graphics, but it's not always going to be lower total power consumption."


    From the benches shown here I infer: the 540M is substantially faster than the fastest available C2D. Which is to say, the T9900 (unless I forget some Core 2 Extreme model) whose TDP is 35W. There is no P-series C2D that provides the performance of the 540M. Thus, an apples-to-apples comparison is really vs. a T9900 (or Core 2 Extreme) which has 35W TDP and *no* integrated graphics.

    And even if you aren't d'accord with my statement above: logically, you can't just compare the P8xxx/P9xxx models' TDP of 25W with Clarksfield's TDP of 35W. After all, Clarksfield includes essentially all Northbridge functionality. The Northbridge for Penryn is rated 12W TDP. So, really, 35W < 25W + 12W = 37W (or, 35W < 35W + 12W = 47W).
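For what it's worth, the platform-level comparison in the comment above reduces to a few lines of arithmetic. This is just a sketch using the TDP ratings the comment cites (thermal limits, not measured power draw):

```python
# TDP bookkeeping from the comment above: Arrandale's 35 W rating covers
# CPU + IGP + the former northbridge functionality on-package, so a fair
# Core 2 comparison adds the ~12 W northbridge TDP to the CPU's rating.
arrandale_tdp = 35    # W: CPU + graphics + memory controller on-package
penryn_p_tdp = 25     # W: P8xxx/P9xxx CPU only
penryn_t_tdp = 35     # W: T9900 / Core 2 Extreme CPU only
northbridge_tdp = 12  # W: separate northbridge chip on the Core 2 platform

p_platform = penryn_p_tdp + northbridge_tdp  # 37 W total
t_platform = penryn_t_tdp + northbridge_tdp  # 47 W total
print(arrandale_tdp < p_platform, arrandale_tdp < t_platform)  # True True
```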
  • jasperjones - Monday, January 04, 2010 - link

    ^^^ of course, I mean Arrandale, not Clarksfield.
  • JarredWalton - Monday, January 04, 2010 - link

    Even then, TDP ratings aren't actual power requirements. They're more like a limit on the thermal output, so you need 35W of cooling on a 35W CPU, even though at idle it probably uses only 5W.

    As far as performance, Arrandale in most cases is about 20% faster. The T9900 is 3.06GHz compared to 2.53GHz on the P8700, which is a 20% performance boost. That would make the T9900 about equal to a 540M. At that level of performance, I would expect the battery life advantage to be more like 5-10% for Arrandale. (Despite the 35W vs. 25W TDP, my experience is that for typical battery life testing scenarios, the 35W TDP CPUs are not substantially worse than 25W CPUs.)
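The clock-speed ratio behind that estimate is easy to check. A rough sketch only, since real performance won't scale perfectly linearly with clock:

```python
# Clock-speed comparison from the comment above.
t9900_ghz = 3.06  # fastest mainstream Core 2 Duo
p8700_ghz = 2.53  # baseline in the comparison above

advantage = (t9900_ghz / p8700_ghz - 1) * 100
print(f"clock advantage: {advantage:.0f}%")  # prints "clock advantage: 21%"
# Roughly the ~20% figure the comment cites, which is why the T9900
# lands about even with a 540M if Arrandale is ~20% faster per clock.
```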
  • Wolfpup - Monday, January 04, 2010 - link

    I'm still surprised that we don't get 32nm quads yet, though I suppose it makes sense from Intel's perspective: they probably make the most from their mid-range "high-end dual cores".

    I'm glad to see there are some new chipsets with this too. PM55 has USB problems that OEMs don't seem to be addressing very well. There's some talk that the newest drivers from Intel, combined with a hotfix from Microsoft that isn't even for this chipset, fix it, but I'm not 100% sure it really does.
  • Alberto - Monday, January 04, 2010 - link

    According to Legit's review, the idle power is very interesting: around 30% lower than on the older platform.
    Likely the difference between the two articles is due to a different BIOS. Moreover, Legit did a lot of tweaks to make the two platforms comparable (CPU apart). In the battery test, the Montevina laptop has a 6-cell battery instead of an 8-cell, but the 30% figure seems confirmed.
  • HotFoot - Monday, January 04, 2010 - link

    One thing I've often wondered about battery tests is variability in the batteries themselves. Of course, over time batteries wear out and life goes down - but what about the difference between new batteries, even ones of the same rated capacities?

    I would be interested to see a review such as this one, but where the battery life is tested twice - swapping batteries between platforms and taking the average. Some adaptation will probably be needed. Or, maybe a standard battery testbench used for all battery life tests - which would involve adapters for each notebook.

    My point is uncertainty. I know it's not an academic paper, but if the variability in results is 10% or higher (which my gut tells me it very well may be with batteries), the conclusions drawn from the results could be radically different. Maybe it's not that bad, and a few tests into the subject would demonstrate that.
  • JarredWalton - Monday, January 04, 2010 - link

    I had two Gateway laptops with the same battery design; one was Intel-based and the other AMD-based. After a request similar to yours, I swapped the batteries and retested. Variability was less than 2%, which is the same as the variability between test runs.
  • kazuha vinland - Monday, January 04, 2010 - link

    Your unit was obviously just a prototype, but can we expect to see the first Arrandale laptops arriving this month or next?
  • webmastir - Monday, January 04, 2010 - link

    love reading your reviews - very insightful. thanks.
  • 8steve8 - Monday, January 04, 2010 - link

    when can we expect reviews of these ULV processors?

    when can we expect laptops with these ULV processors?
  • strikeback03 - Monday, January 04, 2010 - link

    And seriously, wtf was Intel thinking with these names? Five processors, all at different speeds, with either 640 or 620 in the name. If a 620LM were the same speed as a 620UM but just used less power, I could see it, but there are three processors with 620 in the name, running at 1.06, 2.0, and 2.66GHz. The consumer also has to know that a 620M is faster than a 640LM.
  • ET - Monday, January 04, 2010 - link

    I'd love to see more comprehensive mobile benchmarks, but it looks like Intel graphics finally isn't the complete crap it used to be.
  • yuhong - Monday, January 04, 2010 - link

    On Intel codenames, "Clarksfield" can be easily confused with the desktop "Clarkfield".
  • yuhong - Monday, January 04, 2010 - link

    Oops, by Clarkfield I meant Clarkdale.
  • bsoft16384 - Monday, January 04, 2010 - link

    The biggest problem with Intel graphics isn't performance; it's image quality. Intel's GPUs don't have AA, and their AF implementation is basically useless.

    Add in the fact that Intel recently added a 'texture blurring' feature to their drivers to improve performance (which is, I believe, on by default) and you end up with quite a different experience compared with a Radeon 4200 or GeForce 9400M based solution, even if the performance is nominally similar.

    Also, I've noticed that Intel graphics do considerably better in benchmarks than they do in the real world. The Intel GMA X4500MHD in my CULV-based Acer 1410 scores around 650 in 3DMark06, which is about 50% "faster" than my friend's 3-year-old GeForce 6150-based AMD Turion notebook. But get in-game, with some particle effects going, and the Intel pisses all over the floor (~3-4fps) while the GeForce 6150 still manages to chug along at 15fps or so.
  • bobsmith1492 - Monday, January 04, 2010 - link

    That is, Intel's integrated graphics are so slow that even if they offered AA/AF, they'd be too slow to actually use them. The same goes for low-end Nvidia integrated graphics.
  • bsoft16384 - Tuesday, January 05, 2010 - link

    Not true for NV/AMD. WoW, for example, runs fine with AA/AF on the GeForce 9400. It runs decently with AF on the Radeon 3200 too.

    Remember that 20fps is actually pretty playable in WoW with a hardware cursor (so the cursor stays smooth even at 20fps).
  • bobsmith1492 - Monday, January 04, 2010 - link

    Do you really think you can actually use AA/AF on an integrated Intel video processor? I don't believe your point is relevant.
  • MonkeyPaw - Monday, January 04, 2010 - link

    Yes, since AA and AF can really help the appearance of older titles. Some of us don't expect an IGP to run Crysis.
  • JarredWalton - Monday, January 04, 2010 - link

    The problem is that AA is really memory intensive, even in older titles. Basically, it can double the bandwidth requirements, and since you're already sharing bandwidth with the CPU, it's a severe bottleneck. I've never seen an IGP run 2xAA at a reasonable frame rate.
  • bsoft16384 - Tuesday, January 05, 2010 - link

    Newer AMD/NV GPUs have a lot of bandwidth saving features, so AA is pretty reasonable in many less demanding titles (e.g. CS:S or WoW) on the HD4200 or GeForce 9400.
  • bsoft16384 - Tuesday, January 05, 2010 - link

    And, FYI, yes, I've tried both. I had a MacBook Pro (13") briefly, and while I ultimately decided that the graphics performance wasn't quite good enough (compared with, say, my old T61 with a Quadro NVS140m), it was still night and day compared with the GMA X4500.

    The bottom line in my experience is that the GMA has worse quality output (particularly texture filtering) and that it absolutely dies with particle effects or lots of geometry.

    WoW is not at all a shader-heavy game, but it can be surprisingly geometry and texture heavy for low-end cards in dense scenes. The Radeon 4200 is "only" about 2x as fast as the GMA X4500 in most benchmarks, but if you go load up demanding environments in WoW you'll notice that the GMA is 4 or 5 times slower. Worse, the GMA X4500 doesn't really get any faster when you lower the resolution or quality settings.

    Maybe the new-generation GMA solves these issues, but my general suspicion is that it's still not up to par with the GeForce 9400 or Radeon 4200 in worst-case performance or image quality, which is what I really care about.
  • JarredWalton - Tuesday, January 05, 2010 - link

    Well, that's the rub, isn't it? The IGP in the new Arrandale CPUs is not the same as the old GMA 4500MHD. We don't know everything that has changed, but performance alone shows a huge difference. We went from 10 shader units to 12, and performance at times more than doubled. Is it driver optimizations, or is it better hardware? I'm inclined to think it's probably some of both, and when I get some Arrandale laptops to test I'll be sure to run more games on them. :-)
  • dagamer34 - Monday, January 04, 2010 - link

    Sounds like while performance increased, battery life was just "meh". However, does the increased performance factor in Arrandale's Turbo Boost, or was the clock speed locked at the same rate as the Core 2 Duo's?

    And what about how battery life is affected by boosting performance with Turbo Boost? I guess we'll have to wait for production models for more definitive answers (I'm basically waiting for the next-gen 13.3" MacBook Pro to replace my late-2006 MacBook Pro).
  • MonkeyPaw - Monday, January 04, 2010 - link

    Personally, I'm disappointed with the "unchanged" battery life. The reality is, most IGP-based notebooks don't need to be faster for most people. A few friends of mine recently bought notebooks, and they have what I'd call average requirements: email, browser, iTunes, office, photo management. My advice to them was that anything they buy today will be more than fast enough for their needs (they were coming from outdated machines), and that their decision should be based on things like battery life and bonus features. Even my own notebook purchase was based less on total processing power and more on price, and then battery life.

    I think Intel had it backwards. Start by improving battery life, then slowly improve performance. That may have been their plan, but it looks like 32nm has a ways to go for them before that can happen.
  • JarredWalton - Monday, January 04, 2010 - link

    I would expect the LV versions to be competitive, but it wouldn't surprise me if the initial 32nm parts are not where they would like in terms of power use. The rev 2 of Arrandale will hopefully address that shortcoming.
  • secretanchitman - Monday, January 04, 2010 - link

    that's the thing though - who knows what apple is going to do now, since they currently use ion/9400m in the macbooks/macbook pros. they do use intel + discrete ati in the current imacs, yet the lowest end imac has a 9400 in it. the problem with arrandale is that the gpu is integrated on-package, and integrated graphics suck compared to nvidia and ati gpus.

    i'm hoping that intel made special versions of arrandale without the built-in gpu, or that it can be turned off in favor of separate graphics. let's be honest, the 9400m is much better than anything intel offers now.
  • taltamir - Thursday, February 04, 2010 - link

    what does it matter? apple can't run games anyway... neither can any laptop.
    The difference between intel and nvidia/ati is that a laptop with nvidia/ati can get playable FPS on the absolute LOWEST settings, which look like crap. the intel can NOT get playable FPS, period.
    Either would be a horrid experience for a gamer... get a desktop if you want to game.
  • filotti - Tuesday, January 05, 2010 - link

    Actually, the article says that the performance of the integrated GPU is equal to the performance of the 790GX IGP. This means that it should be equivalent to the 9400m too.
  • mino - Saturday, January 09, 2010 - link

    Raw performance? Probably comparable to the 785G in 3D benchmarketing.

    Drivers able to actually use that performance? Nonexistent.

    This is pretty much a beefed-up HD4500. Nothing less, nothing more.
  • marc1000 - Monday, January 04, 2010 - link

    The desktop versions have an x16 PCIe link, so I believe it IS possible to pair the mobile versions with discrete graphics too. Not that manufacturers will be all that interested in doing so. After all, it seems that Intel aimed for the same level of performance the 9400 has, so Apple would have no reason to go "non-Intel". I guess this is the primary reason they made the GPU perform exactly at this level. It's the "good enough" philosophy.
  • Penti - Tuesday, January 05, 2010 - link

    Of course you can; the HM/QM/QS57 chipsets support switchable graphics too, and the CPU has the same 16 (presumably) on-die lanes as the desktop parts for discrete graphics. There are also 8 lanes on the chipset. Apple, HP, Dell, Fujitsu, etc. will be interested for more high-end business models and consumer products.
