
  • phatboye - Wednesday, November 09, 2011 - link

    Why am I not surprised to see Intel push out another socket which I am going to bet is incompatible with the previous one. This is exactly why I love AMD. Too bad they don't have a desktop CPU that can match Intel's performance.
  • GreenEnergy - Wednesday, November 09, 2011 - link

    I guess you missed that the VRM goes on-die with Haswell, meaning all the VRM parts on motherboards around the CPU disappear. Though AnandTech is partly to blame, since they missed quite a few slides.

    And with boards hitting $30-40 with Haswell, I think we'll survive.
  • GreenEnergy - Wednesday, November 09, 2011 - link

    Forgot to add the slide:
  • phatboye - Wednesday, November 09, 2011 - link

    I doubt high-end boards will be anywhere near $30-$40. Maybe cheap-o netbook boards, though.

    Yeah, I see the on-die VRM, but I wasn't aware of it until this article.
  • DanNeely - Wednesday, November 09, 2011 - link


    Intel might claim "overclocking improvements", but I fear what we're going to see is VRMs with virtually no headroom to increase power except on the most expensive bins.
  • GreenEnergy - Wednesday, November 09, 2011 - link

    We've heard that story before. Yet it turned out to Intel's advantage.
  • wifiwolf - Thursday, November 10, 2011 - link

    Yup. I thought the move from 3 chips to 2 would make things cheaper, but the opposite happened.
  • SleepyFE - Wednesday, November 09, 2011 - link

    Finally, someone else who thinks that way. I too am a fan of using a slightly older board, so I can keep the cost down. I found that Gigabyte releases enough firmware updates to support newer CPUs; I can upgrade to a 6-core that wasn't even out when I bought the board. Unfortunately I am stuck with DDR2 RAM (fortunately I bought RAM before the price went up).
  • Starfireaw11 - Wednesday, November 09, 2011 - link

    In real terms, very few people actually upgrade their processors over the lifetime of a PC - it's just us techie types and enthusiasts that do, or would like that option. The vast majority of users either don't upgrade at all and simply replace the entire system when it's past its prime, or upgrade the "holy trinity" of motherboard, processor and RAM.

    By changing to a new socket, Intel is not limited to an established and ageing baseline and is free to make changes to the supporting technology as they see fit, at the cost of ruffling a few feathers in the "upper middle class" of their user base, which is neither particularly large nor profitable. The mainstream and low-end customers will replace their entire system at the end of its service life, and the enthusiast, professional and server customers always want bleeding-edge and will replace components or systems regularly in order to stay there.

    By staying with rigidly defined socket specifications, AMD is forced to make compromises in their CPU designs or to limit features that can be enabled. They are also forced to rely on a northbridge chipset for much of the system functionality that Intel has been slowly pulling on-die. Another downside of the fixed socket specification is that the user experience varies greatly depending on the choice of motherboard - some pretty good AMD processors have been hobbled by being packaged with an outdated chipset or motherboard. The AMD solution does allow for cheap, incremental upgrades though.

    There are arguments for and against each setup and as always, it's up to the customer to decide.
  • retrospooty - Thursday, November 10, 2011 - link

    "Why am I not surprised to see Intel push out another socket which I am going to bet is incompatible with the previous socket. This is exactly why I love AMD."

    A lot of the time it is necessary in order to push performance upward.

    Intel changes sockets more, sure... But on the other side, AMD is sort of stagnating on the performance front. Their new chips are hardly any faster than the ones they had 3 years ago. In fact, Intel's chips from 3 years ago (45nm Core 2 Quad) still beat today's AMD chips in most cases.
  • GreenEnergy - Wednesday, November 09, 2011 - link

    Kinda funny to see. With Haswell it all happened, and went even further with the mobile CPUs.

    It's a huge amount of real estate on motherboards that gets integrated and removed, reducing footprint and power consumption while boosting performance.
  • Anand Lal Shimpi - Wednesday, November 09, 2011 - link

    +1000 for remembering this :)
  • Filiprino - Wednesday, November 09, 2011 - link

    FFFFFFUUUU Intel. 4 cores, 4 cores. 4 cores!!

    We want moar cores. Me hungry. Cores come to me. In big quantities. Now.

    At least for the mainstream, they should up the number to 6 cores. For the enthusiast platform, 8 or 10 cores shouldn't be out of the question.
  • eddman - Wednesday, November 09, 2011 - link

    Why? Developers can't even write proper programs that take full advantage of current quad-cores, or maybe they're just lazy.

    Besides, not every kind of software can benefit from more cores. Read up on Amdahl's law.

    I personally think more than 4 cores for desktops is just a waste of transistors.
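    A quick back-of-the-envelope sketch of Amdahl's law (my own illustration, not from the comment): even a program that is 90% parallel tops out at a 10x speedup no matter how many cores you throw at it.

```python
# Amdahl's law: overall speedup is capped by the serial fraction of a program.
# speedup(n) = 1 / ((1 - p) + p / n), where p is the parallel fraction.

def amdahl_speedup(parallel_fraction, n_cores):
    """Upper bound on speedup for a workload with the given parallel fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

# Diminishing returns: going from 4 to 16 cores barely doubles the speedup
# for a 90%-parallel program.
for cores in (2, 4, 8, 16):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
```

    This is the commenter's point in numbers: doubling the core count does not double performance unless the serial fraction is close to zero.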
  • nofumble62 - Thursday, November 10, 2011 - link

    Otherwise they're just a marketing number, like you find in the Bulldozy.
  • Filiprino - Thursday, November 10, 2011 - link

    Of course, not every piece of software benefits from parallelization, but I know how to get benefit out of it.

    What Intel is doing now is a waste of silicon. Ask the people who have bought a Core i7 2X00K. Wow, integrated graphics on an overclockable processor is so useful. People who buy those procs have a discrete card.

    Heavy multitasking, rendering, encoding, virtualization and games are workloads that benefit, or will benefit, from added cores.

    Come on, we are talking about Haswell and its refresh (2013-2014). Today's DirectX 11 can already do multithreaded rendering. Imagine what level of efficiency DirectX 12 will reach on that front. In the not-so-distant future we will start seeing games use multiple threads to render different parts of a scene. It will be possible with one GPU, and if you add more GPUs the options increase: you could have one GPU rendering some objects and another GPU rendering others, with the load distributed through simple DirectX directives if you want, instead of the current model where the driver has to account for all the load itself and split the frame in a coarse-grained way that also causes issues with games (scalability problems or erratic behaviour).

    The drivers should provide unified virtual memory, something that's already being worked on; for example, the Radeon HD 7000 series will have a unified memory model, making it easier to share data between GPUs and main memory. In two years that will be refined to the right level.

    The key is to get multi-GPU graphics threading/processing working the same way you get ordinary application multiprocessing and multithreading today.
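    The idea of recording draw work on several threads and submitting it in order can be sketched in a few lines. This is a hypothetical toy in Python (function names like `build_draw_list` and `render_scene` are my own, not any real graphics API); a real engine would record D3D11 command lists on deferred contexts instead.

```python
from concurrent.futures import ThreadPoolExecutor

def build_draw_list(objects):
    # Stand-in for per-thread command recording (a deferred context in D3D11).
    return [f"draw {obj}" for obj in objects]

def render_scene(objects, n_threads=4):
    # Partition the scene's objects into roughly equal chunks, one per thread.
    chunk = max(1, len(objects) // n_threads)
    chunks = [objects[i:i + chunk] for i in range(0, len(objects), chunk)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        draw_lists = pool.map(build_draw_list, chunks)
    # Submission stays serialized and in order, like executing command lists
    # on the immediate context.
    return [cmd for dl in draw_lists for cmd in dl]

commands = render_scene(["tree", "rock", "player", "sky"], n_threads=2)
```

    The recording is parallel but the final submission order is deterministic, which is what keeps multithreaded rendering correct.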
  • moozoo - Wednesday, November 09, 2011 - link

    I used to think that way, that the future was more big cores. No, the future is custom accelerators (video encoding/decoding) and unified-shader GPU "cores" that accelerate compute and graphics.
  • Filiprino - Thursday, November 10, 2011 - link

    You also need more cores to control those accelerators.
  • Roland00Address - Wednesday, November 09, 2011 - link

    The socket they are referring to is the mainstream socket with 1150 pins. It is only going to be 4 cores; who knows how many cores the enthusiast platform will get.
  • frozentundra123456 - Thursday, November 10, 2011 - link

    I agree. Even though not many applications use more than 4 cores now, we are talking almost 2 years before the platform comes out. Then if you expect the computer to last 2 to 4 years, we are looking at perhaps 6 years before one of these chips reaches end of use. I can't believe that at least 6 cores won't be needed by then for the mainstream. When dual cores first came out, people said there was not enough software for them; now we are in the same place with 4 or 6 cores.

    Maybe AMD's moar cores will let them catch up eventually. I don't think Intel should rely on superior architecture and IPC to the exclusion of eventually increasing core counts.
  • Roland00Address - Wednesday, November 09, 2011 - link

    Either built into the MacBook Pros themselves, or available via Thunderbolt and an external GPU.
  • bhima - Monday, January 16, 2012 - link

    Thunderbolt does not have the bandwidth to push the kind of graphics one would want from an external graphics card. In fact, I don't think it even has PCIe 1.0 x16 bandwidth, let alone 2.0 or now 3.0.
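    Rough bandwidth arithmetic backs this up (my own numbers, not from the comment): original Thunderbolt carries 10 Gbit/s per channel, while PCIe is rated per lane at roughly 2 Gbit/s usable for 1.0 and 4 Gbit/s usable for 2.0 after encoding overhead.

```python
# All figures in Gbit/s, approximate usable bandwidth.
thunderbolt_channel = 10      # one Thunderbolt (2011) channel
pcie_1_x16 = 16 * 2           # PCIe 1.0 x16 slot: 32 Gbit/s
pcie_2_x16 = 16 * 4           # PCIe 2.0 x16 slot: 64 Gbit/s

# Even the old 1.0-spec x16 slot has several times Thunderbolt's bandwidth.
ratio = pcie_1_x16 / thunderbolt_channel
```

    So a Thunderbolt channel sits closer to a PCIe x4 link than to the x16 slot a discrete GPU normally gets.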
  • nofumble62 - Wednesday, November 09, 2011 - link

    I can date that back to the 486.

    I have never upgraded the CPU on any of the following systems: Pentium II (Slot 1), Pentium III, Athlon, Pentium 4, LGA775, LGA1156. In fact, I always buy the motherboard and CPU as a combo.

    CPU upgrading is just a distant memory of the good old days, folks.
  • Kjella - Friday, November 11, 2011 - link

    Duron 700 to Athlon 1.2 here; that was a kick-ass upgrade on the same mobo. Another opportunity was blocked because I wanted PCI Express for graphics, and for the final chance at an upgrade I switched to an Intel Q6600 instead. Intel changes too fast to make sense; even if, say, Sandy Bridge and Ivy Bridge share a socket, who upgrades every generation? Even upgrading every two years, which is rapid, it still won't happen: Q6600 = LGA775 (2007), Core i7-860 = LGA1156 (2009), Sandy Bridge = LGA1155 (2011), Haswell = LGA1150 (2013). Still, after Bulldozer I'm pretty sure my next computer will be an Intel all the same...
  • GreenEnergy - Friday, November 11, 2011 - link

    The issue is component integration. We want it, we need it, but it also comes with a side effect.

    Plus voltage control and so on is much more refined.

    AMD's FM1 and FM2 are incompatible as well, and you simply get a lot more when they aren't compatible. LGA1156 to LGA1155 changed the power delivery and a few other things. LGA1150 gets the on-die VRM, so it's obviously incompatible.

    AMD's FX chips didn't work in older boards either, so you had to get a new board anyway, should you for whatever reason want one of those CPUs.

    We might get socket reuse again in the future, as you ask for. There are only the PCH components left for Intel to integrate; a bit longer road for AMD. But essentially it's just a matter of time before boards are dumb PCBs with a few slots and nothing else, if everything gets integrated by then.

    The question is simply what gets integrated on-die next. Memory, to kill discrete graphics? Or the SATA, USB, Thunderbolt, etc. controllers, to make boards smaller and cheaper yet again?
  • 8steve8 - Thursday, November 10, 2011 - link

    Why do the desktop versions of these chips top out at GT2, while the ultrabooks get GT3?

    It seems strange. Good-enough integrated graphics are a good thing, and desktops can afford the heat cost more easily than ultrabooks/notebooks.

    Yes, desktop owners can buy PCIe GPUs, but that's missing the point: integration is good even when there are options. Cost, energy and resources would all be saved if the IGP bar were higher on desktops.
  • anactoraaron - Thursday, November 10, 2011 - link

    Perhaps screen resolution is the issue here. It's the only thing that really stands out. Maybe GT3 only outperforms GT2 at your typical low-res 1280x800? Otherwise I agree with you. It should at least appear on the non-K i3 or i5 chips, leaving GT2 for the Pentium, Celeron and unlocked K lines.
  • Arnulf - Thursday, November 10, 2011 - link

    Or perhaps the naming isn't meant to indicate that a higher number equals higher performance. Take touring car racing series, for example: GT1 (incidentally the same name!) is top-of-the-line, even purpose-built machinery, while GT3 is the class closest to production cars (with GT2 in between). So GT1 > GT2 > GT3.
  • Lonbjerg - Thursday, November 10, 2011 - link

    You can put a discrete GPU in a desktop if you need the added power... not so much in an ultrabook...
  • anactoraaron - Thursday, November 10, 2011 - link

    IIRC there isn't a big enough bandwidth gain with DDR4 (it's too close to DDR3), which is why GPUs jumped straight from GDDR3 to GDDR5. I would expect desktop RAM to follow suit.
  • KaarlisK - Thursday, November 10, 2011 - link

    There is no connection between DDR4 and GDDR4. GDDR4 is actually based on DDR3.
  • etaminaenesed - Thursday, November 10, 2011 - link

    I've read in this article that Haswell will have the same TDP as Sandy Bridge, or as the older dual-core ULV processors. Is there any chance of fanless notebooks reaching the market in the next 2 years? If you look at Atom processors, especially Cedar Trail, you will see 1.9GHz processors running at a max TDP of 2.2W, but those will only be found in netbooks. So, is there some conspiracy in place, or what?
  • Johnnyrock - Thursday, November 10, 2011 - link

    I still don't get why they can't just bring out one single integrated GPU at full power.
  • danwat12345 - Thursday, November 10, 2011 - link

    No 6-core mobile laptop chips? That sucks. I was hoping for a 45-55W TDP 6-core mobile chip, a high-clocked 4-core at 45-55W, and a moderately clocked 4-core at 35W.
  • danwat12345 - Sunday, November 13, 2011 - link

    No 6-core laptop or desktop Haswell or Ivy Bridge chips??
    That sucks and really doesn't make a whole lot of sense.
  • GreenEnergy - Sunday, November 13, 2011 - link

    Neither does a 6-core laptop or desktop, outside a very tiny niche market. If you want 6 cores or more, Intel says buy LGA2011 or its successor.
