
  • Hulk - Monday, December 05, 2011 - link

    but I still wish we'd be seeing a hex-core IB. Video editing and transcoding are applications where the additional cores would definitely be helpful. But of course, as you mentioned, Intel doesn't want to cannibalize its higher-end solutions.

    I suppose we'll eventually get to the point where the high end parts will be 8 or more cores and then the mainstream parts can go to 6 cores.

    Too bad BD wasn't more competitive...
  • euler007 - Monday, December 05, 2011 - link

    There probably will be a socket 2011 IB-E, but since they just launched the SNB-E CPUs and the new chipset, they're keeping hush about it so that the people with deep pockets who want the best and most expensive toys aren't tempted to wait for it.
  • ncrubyguy - Monday, December 05, 2011 - link

    Yeah... The George Lucas Extreme Collectors Edition with those last 2 cores enabled.
  • DigitalFreak - Monday, December 05, 2011 - link

    "Video editing and transcoding is one application where the additional cores would definitely be helpful"

    Maybe, but wouldn't you get the same or better performance for these tasks using Quick Sync?
  • JarredWalton - Monday, December 05, 2011 - link

    Potentially, but there's also a risk of quality loss, and to my knowledge the major video editing applications (e.g. Adobe Premiere) don't use it.
  • plonk420 - Monday, December 05, 2011 - link

    *old* x264 (i.e. a GOOD encoder) docs say "The quality loss from multiple threads is mostly negligible unless using very high numbers of threads (say, above 16)."

    6-core/12-thread may barely hit that limit, but I doubt it's noticeable. I'll have to run some tests if I feel motivated to do so later...

    For video *editing* it probably doesn't matter.
  • Sivar - Wednesday, December 07, 2011 - link

    x264's quality loss as threads increase is roughly linear: 8 threads will lose about twice as much quality as 4. I suspect that documentation, if fleshed out, would say something like "When encoding video at a set bit rate, the quality loss from multiple threads is nearly imperceptible until one encodes with, say, 16 threads or so."

    The loss is somewhat measurable, though. And if you encode a video in quality mode, the quality holds and only the file size increases.
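    If the loss really is linear, the "8 threads lose about twice as much as 4" claim follows directly. A toy sketch of that model (the per-thread constant is purely an assumption, not a measured x264 figure):

```python
# Toy linear model of the "quality loss scales with thread count" claim.
# LOSS_PER_THREAD_DB is hypothetical, not an x264 measurement.
LOSS_PER_THREAD_DB = 0.005  # assumed PSNR cost per encoding thread

def quality_loss_db(threads):
    """If loss is linear in thread count, 8 threads cost twice what 4 do."""
    return LOSS_PER_THREAD_DB * threads

for n in (1, 4, 8, 16):
    print(f"{n:2d} threads -> ~{quality_loss_db(n):.3f} dB loss")
```

    Under this model the loss only becomes worth worrying about well past the core counts of any desktop chip discussed here.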
  • piroroadkill - Monday, December 05, 2011 - link

    Quick Sync has to be the least supported piece of technology around.

    I own a 2500K and a Z68 board, but only a couple of amateur level commercial products actually support it.

    Tell me when regular x264 supports it.
  • MonkeyPaw - Monday, December 05, 2011 - link

    You mean an Intel graphics engine has poor support? :-0

    Maybe, just maybe, if Intel would leave QS enabled in ALL its CPUs, we might actually see real support for it. But that's not the Intel business model. It will probably take AMD copying the idea and using it across their lineup before we see Intel do it too, just like with CnQ/SpeedStep and 64-bit.
  • Zink - Monday, December 05, 2011 - link

    There also isn't a huge demand for QS today. Video on mobile devices is becoming more common, but cellular data and video acceleration are getting better too, so you can either stream a video or play the full-quality 1080p file from your PC. I think the percentage of consumers who have Quick Sync and transcode videos or rip them from BD is very tiny. There are plenty of people who do video professionally, but the quality issues with QS aren't worth it for professional applications. Maybe QS could be integrated into video editing software to generate the video previews, with standard encoding used for the final product.
  • name99 - Tuesday, December 06, 2011 - link

    The real win for QuickSync is not video transcoding --- few people do that. It's high quality video conferencing.
    This is probably rare in the bulk Windows world because HiDef webcams are rare on cheap laptops.
    The win, again, is not so much reducing your CPU load, although that is nice --- it is doing the work at lower power.

    OSX, for example, uses QuickSync for iChat HD, even though Apple doesn't (yet --- I expect this will change) use it for anything else.
  • icrf - Tuesday, December 06, 2011 - link

    Is video chat really that popular? I've seen ads on TV for two decades now, everyone from Cisco to Apple telling me it's awesome, yet I've still never known anyone who used it.
  • icrf - Monday, December 05, 2011 - link

    QuickSync is a hardware encoder. x264 is a software encoder. There's not really any combining them; they're completely separate, independent pieces. You can't use one to accelerate the other.

    x264 has long been a quality-driven encoder, first and foremost, so it's always been my personal preference. That means I don't care about QS and just want more cores. I don't really care about onboard graphics either, since I always run a discrete card. That should make me a perfect candidate for the -E segment, but I'm just not willing to spend the money. I don't think an SNB-E is significantly more expensive or difficult to create than SNB, so I have a hard time justifying it. It's just targeted at a smaller, generally wealthier subset of the market that I, an otherwise average consumer, happen to be sitting in.

    What else does the "willing to pay a premium" market segment want that Intel could hold back while still giving some consumers more cores?
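    The hardware/software split described above shows up directly in how you'd drive an encoder. A sketch assuming an ffmpeg build that includes both libx264 and the h264_qsv (Quick Sync) encoder; the commands are illustrative, not a benchmark recipe:

```python
# Sketch: software (libx264) vs. hardware (Quick Sync) encoding expressed
# as ffmpeg command lines. Assumes ffmpeg was built with both encoders.

def encode_cmd(src, dst, hardware=False):
    if hardware:
        # Quick Sync: the fixed-function block does the encoding work.
        codec_args = ["-c:v", "h264_qsv", "-global_quality", "23"]
    else:
        # x264: pure software, quality-driven rate control (CRF).
        codec_args = ["-c:v", "libx264", "-crf", "23", "-preset", "medium"]
    return ["ffmpeg", "-i", src] + codec_args + [dst]

print(" ".join(encode_cmd("in.mkv", "out.mp4")))
print(" ".join(encode_cmd("in.mkv", "out.mp4", hardware=True)))
```

    The point is that the two paths are selected wholesale; there's no flag that hands part of an x264 encode to the Quick Sync block.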
  • piroroadkill - Thursday, December 08, 2011 - link

    I thought QuickSync was just Intel marketing BS for using their onboard GPU to do the encoding work.

    I can't imagine why your quality would be horribly affected if you had the GPU do some work.

    But what do I know, seriously, I'm not on the x264 team, and my hat is firmly tipped to them. It is the best encoder, of course.
  • BSMonitor - Tuesday, December 06, 2011 - link

    BD would not have changed the core count in Ivy Bridge. Intel's current roadmap had mainstream as quad-core. Period. They would not redesign the die at the last minute to include two more CPU cores.

    Wanting six cores on a CPU that includes an integrated GPU doesn't really make much sense for the vast majority of people buying these mainstream IVB processors. So, you want 6 cores? SNB-E, or wait until IVB-E.

    More than that, all CPU cores are not created equally. 6-8 K10 cores don't even come close to SNB or IVB cores. A quad core IVB will transcode nearly on par with any 6-core processor out today.
  • ajp_anton - Monday, December 05, 2011 - link

    Basically we need the high-end CPUs to become 8-core first, so let's hope IVB-E gets there so that Haswell can bring 6-core CPUs to the "low end". I can make good use of many cores, but I also need an IGP in my small "portable" computer with all PCIe slots in use for things other than graphics.
  • Zink - Monday, December 05, 2011 - link

    There's also the ITX segment, where the smaller low-end socket is the only option and having CPU graphics is useful.
  • DanNeely - Monday, December 05, 2011 - link

    SB-E dies are 8-core, but either due to poor yields or to push market segmentation even harder, they're only going to be used for Xeons that make the i7 Extreme look cheap. I suspect we'll find out which in about 6 or 8 months, when Intel either gives 1 or 2 multiplier bin increases across the SB-E line or drops an 8-core down to the Extreme slot.
  • DigitalFreak - Monday, December 05, 2011 - link

    "Video editing and transcoding is one application where the additional cores would definitely be helpful"

    Maybe, but wouldn't you get the same or better performance for these tasks using Quick Sync?
  • GrizzledYoungMan - Monday, December 05, 2011 - link

    Not exactly. In the pro video realm, quality matters just as much as speed. Unfortunately, the quality of encoding produced by the programs that can use Quick Sync is inferior to commercial grade transcoding applications (like, say, anything offered by Telestream).

    In general, the results of GPU-assisted transcoding are regrettably poor compared to good software transcoding. And software transcoding is generally more flexible in terms of adjusting settings and updating to new codecs/processes. Hence, pros go for lots of sockets and lots of cores.
  • Taft12 - Monday, December 05, 2011 - link

    "In the pro video realm, quality matters just as much as speed."

    Actually, quality matters MUCH MORE than speed. You can always throw more hardware at the problem (multi-socket workstations, for starters).
  • RussianSensation - Monday, December 05, 2011 - link

    Valid point; however, with the smartphone and tablet revolution upon us, I imagine most people will want to convert movies/videos to watch on their smaller 1280x720 smartphones. I think for those purposes, the ability to convert video quickly using QuickSync or a discrete GPU is sufficient. Put it this way: the majority of the world is going to laptops/smartphones and tablets. If the consumer trends are pointing to more power-efficient mobile devices, it's clear those same users aren't concerned with the best image quality that takes 10x longer to achieve. I'll take a 25-minute video conversion to watch a movie on the subway over waiting 5 hours to achieve superior image quality I won't even notice.

    With IVB being 4-core and shifting to 22nm, overclocking potential should be far greater than 6-core SB-E. In games and most tasks, the $200-300 IVB chip should easily beat a $500-1000 SB-E. That leaves server and workstation (rendering) users really wanting a 6-core CPU. Looking at the 6-core Phenom II 1100T and 8-core FX-8150, it's obvious that today clock speed, performance/watt and performance per clock (IPC) are far, far more important than having more cores. Even lower-end Core i3s have no problems cleaning up Phenom II X4s in games.
  • iwod - Monday, December 05, 2011 - link

    Both speed and quality matter. QuickSync is slow and of poor quality. That is the problem.

    x264's fastest settings can produce video files marginally faster than QuickSync, and with much better quality.

    The new QuickSync in IB will be 50% faster, but even then it is very slow in my opinion.
  • twotwotwo - Monday, December 05, 2011 - link

    If this isn't a major microarchitecture revision, any idea where that's coming from?
  • Kristian Vättö - Monday, December 05, 2011 - link

    Tri-Gate. Intel is claiming an 18% performance increase at 1V compared to regular planar transistors (the regular article doesn't work for me for some reason).
  • MrSpadge - Monday, December 05, 2011 - link

    This "performance increase" says "you can clock me 18% higher"... which you'd have to do in order to translate this advantage into faster execution times. Intel is not doing this (yet); see the previous post on IVB clock speeds.

  • Zink - Monday, December 05, 2011 - link

    Yep, Kristian is confuzled. Performance won't really be any higher. They're shipping the same clocks but at lower TDPs than SNB, and it's the same architecture. Performance is 15% higher TDP-for-TDP.
  • JarredWalton - Monday, December 05, 2011 - link

    As the text notes, Intel is claiming 15% performance improvement due to caching and other architectural improvements. This is comparing the i7-2600 to the i7-3770, both of which are 3.4GHz parts with similar Turbo Boost. Here are the numbers:

    SYSMark 2012: 7% faster
    HDXPRT 2011: 14% faster
    Cinebench 11.5: 15% faster
    ProShow Gold 4.5: 13% faster
    Excel 2010: 25% faster (mostly from cache)

    I will come right out and say that I'm not familiar with at least two of those tests, but despite the AMD brouhaha SYSMark 2012 appears perfectly reasonable as an office application benchmark. It's also the lowest improvement, not surprisingly.
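    For what it's worth, the simple average of those five numbers lands almost exactly on Intel's claim:

```python
# Jarred's per-benchmark gains (i7-3770 vs. i7-2600), averaged to
# sanity-check the ~15% overall claim.
gains = {
    "SYSMark 2012": 7,
    "HDXPRT 2011": 14,
    "Cinebench 11.5": 15,
    "ProShow Gold 4.5": 13,
    "Excel 2010": 25,
}
avg = sum(gains.values()) / len(gains)
print(f"average gain: {avg:.1f}%")  # average gain: 14.8%
```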
  • maroon1 - Tuesday, December 06, 2011 - link

    The i7-3770 has a Turbo of 3.9GHz, which is 100MHz higher than the i7-2600's.
  • Kristian Vättö - Monday, December 05, 2011 - link

    In the slides provided by Intel, the i7-3770 performs about 15% faster on average than the i7-2600. Both are clocked at 3.4GHz. You are right that Tri-Gate isn't the main reason (my bad, sorry), but there must be some architectural changes that improve the performance (and that's what Intel is claiming). Of course, keep in mind that these numbers are provided by Intel. I think in real life we will be looking at less than a 10% performance increase as a whole.
  • hechacker1 - Monday, December 05, 2011 - link

    Perhaps some of that increased performance is due to the lower TDP and increased efficiency per watt. Ivy could potentially stay in Turbo far more often without exceeding the overall TDP limit.
  • haukionkannel - Monday, December 05, 2011 - link

    That seems like a very reasonable explanation! Tri-Gate offers some improvement, but the low TDP actually makes it easier to use higher turbo more often!
  • Iketh - Monday, December 05, 2011 - link

    aaaHAH! Yes, that is it, or probably 75% of it anyway...
  • Taft12 - Monday, December 05, 2011 - link

    The 15% figure is coming from marketing.

    I'd be shocked if you could squeeze 15% faster performance from IVB at the same clockspeed except in the most useless of synthetic benchmarks (but don't worry, there'll be plenty of those!)
  • MrSpadge - Monday, December 05, 2011 - link

    Yeah, that would actually be a hefty tock, not a tick.

  • name99 - Tuesday, December 06, 2011 - link

    Just because it isn't a "major" rev doesn't mean that there won't be minor mods. There usually are. They tweak the L2 and L3 latencies, maybe add a few more "virtual" registers, maybe make the buffers holding preloaded instructions, or post-decoded instructions a little longer, etc etc.
    Along with that, there are sometimes ideas that were put in the SB micro-architecture that will add a % or two to performance, but which were disabled for SB because they couldn't be made to work in time, but they're now working in IB.

    At this stage of the game, it would be surprising if all these add up to 15% rather than, say, 5%, but I think we have to withhold judgement until Intel gives us the real micro-architecture details.
  • Marlin1975 - Monday, December 05, 2011 - link

    Same thing was said when dual core was just coming on and later Quad.

    Things are not going to be coded for dual, quad, etc... until there are enough on the market.

    "Build it and..."
  • JarredWalton - Monday, December 05, 2011 - link

    Quad-core chips have now been on the market for over five years, and there are still regrettably few applications that can leverage the additional cores, particularly when we look at the apps that people use 95% of the time (e.g. web browsers, email, office apps, and to a lesser extent image editing). There are plenty of tasks that can get split in two, but splitting them into four independent tasks isn't always possible, and taking it to six, eight, etc. results in most things reaching their limit of subdivision.

    3D rendering is a great example of a task that scales almost perfectly, and video transcoding is right there with it, but what can you do to make Word utilize (or even need) more than four cores? Sure, multitasking, but even then you're going to run into bottlenecks elsewhere (e.g. the HDD needs to be replaced with an SSD for it to scale), and while you can do something like a virus scan, video transcode, and play a game on a sixteen-core monster... who actually does that sort of thing?
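    The "limit of subdivision" above is just Amdahl's law. A quick sketch, with the parallel fraction p = 0.7 chosen purely as an illustrative assumption for a typical desktop app:

```python
# Amdahl's law: with a serial fraction of the work, adding cores hits
# diminishing returns fast. p is the parallelizable fraction of the task
# (0.7 here is an assumed figure, not a measurement).
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for cores in (2, 4, 8, 16):
    print(f"{cores:2d} cores -> {speedup(0.7, cores):.2f}x")
```

    With p = 0.7 the speedup can never exceed 1/0.3 ≈ 3.3x no matter how many cores you throw at it, which is why going from four to six or eight cores barely moves the needle for most software.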
  • DanNeely - Monday, December 05, 2011 - link

    Chrome/IE's process-per-tab, and to a (much) lesser extent FF/Opera's separate plugin processes, kinda sorta take advantage of multiple cores in that they keep a badly behaving tab from strangling the browser's performance by going into an infinite loop.
  • name99 - Tuesday, December 06, 2011 - link

    "Chrome/IE's process per tab "
    This is a perfect example of a "not-a-real" solution.
    Yeah, it kinda helps if you're running Google's hoped for world of really heavy-weight JS apps in multiple windows, but that's not most people.

    What most people want is "I open a page and it appears immediately", not "I can run three heavy weight, constantly running JS pages in the background".

    And you ultimately admit this yourself: "kinda sorta take advantage of multiple cores in that they reduce the ability of a badly behaving tab from strangling the browser's performance by going into an infinite loop." That's nice --- but the point of better hardware is not to deal with crap web pages. (Google's solution of shunting them ever lower in the search rankings is a much better one.)

    What we WANT is an engine that will run all the heavyweight parts of processing a web page in parallel. We simply don't have that --- no point in pretending otherwise.
  • Iketh - Monday, December 05, 2011 - link

    hey now, what have you got against virus scanning, transcoding, and gaming all at once? lol... I admit I am guilty of this on more than one occasion, and on a measly 2600K... StarCraft 2 and FSX don't do well in this situation though
  • tomvh - Monday, December 05, 2011 - link

    Can't agree more. I went to a Core 2 Quad 6600 and three years later, I'm still very disappointed at how few applications are multithreaded.

    If you look at Task Manager, you see a thread count in the 600 to 900 range, so at least you have extra thread-handling capacity available. Hyper-Threading, which I don't have, would help even more.

    Waiting for a low-TDP IB quad for my next HTPC build, mainly for transcoding my movie media backups to my media tank for whole-house viewing over my Cat6 gigabit network.
    Heck, the Core 2 Duo 2.6 in my media tank server can display any Blu-ray fine.

    Otherwise, I would keep running my Q6600 til the whole box died.

    So much for progress, if you're not a gamer who needs it.
  • ltcommanderdata - Monday, December 05, 2011 - link

    If Ivy Bridge really does have a decent IGP, and now that it has better support for GPGPU with OpenCL 1.1 and CS 5.0, wouldn't Intel encouraging developers to make use of GPGPU for those few cases where extreme multithreading is needed work out as a good compromise? I.e., high clock speed quad-core + decent GPGPU vs. slow clock speed hexacore + slow/no GPGPU.
  • DanNeely - Monday, December 05, 2011 - link

    Maybe, but with how poorly Intel's current IGP stacks up against even low-end discrete GPUs, I suspect we'll have to wait at least until Haswell for an IGP that's enough faster than the CPU at OpenCL to justify using it for performance reasons instead of only having a single code path.
  • hardapple - Monday, December 05, 2011 - link

    We software people have NO IDEA what to do with them. Concurrency is a huge unsolved problem, which actually breaks down into two or three major unsolved problems. Our brightest minds are completely clueless on this subject. I don't think the problem will ever be solved, because concurrency is fundamentally at odds with the way human beings think. We are not multithreaded multitaskers.

    To paraphrase Donald Knuth (aka the Father of Computer Science), "I've worked on over a thousand computing projects in my career and I can't even think of 5 that would benefit from concurrent algorithms." He calls it an attempt by the hardware people to push the blame for the death of Moore's Law onto software people.

    Multicore hysteria is going to lead to a lot of wasted silicon. That die space should be devoted to cache, GPU, and SoC functionality -- things that will actually have huge benefits for users.
  • tipoo - Monday, December 05, 2011 - link

    Well I think the four in IB is fine, but as for the push towards six I agree. I'd rather that thermal headroom be used for faster quads or better on-die graphics like AMD went for.
  • sticks435 - Monday, December 05, 2011 - link

    This. There is a thread discussing the AMD 'we're not going to compete with Intel anymore' announcement, and a big point of that is Intel will slow down the pace of innovation because they don't need to compete anymore, which would result in quad-core CPUs for a long time. Most everyone was complaining about it, but a few were happy, because it would allow software to catch up to the hardware.
  • hardapple - Monday, December 05, 2011 - link

    AMD announced their first dual-core CPU in June 2004. Before that, SMP systems with 2 CPUs had been around for ages. Yet after all this time, no one has figured out how to use these extra cores. They still sit idle most of the time.

    Maybe software is never going to "catch up" to the hardware, and why should it? Taking existing software and making it multithreaded almost always comes at the price of stability. Programmers simply cannot think like a multicore CPU. So it's no surprise that our attempts at multithreading crash and crash hard.

    Intel and AMD should stop at 4 and devote all their effort to improving single-threaded performance. Even if they can't keep up with the expected doubling of performance (and they can't), they will still be making hardware worth buying. I can live with a 10% performance improvement every 2 years if it comes in the form of an SoC with all day battery life.
  • DanNeely - Monday, December 05, 2011 - link

    Multisocket SMP systems were priced outside the consumer market, so their failure to develop any useful consumer-related apps isn't that telling. The database and web server applications that many of them were (and still are) running scale well across multiple cores, since they're able to process multiple requests in parallel.
  • MrSpadge - Monday, December 05, 2011 - link

    Agreed. 2 cores did make sense (running separate programs), and quad is a nice "safety buffer" in case that demanding app comes along... as long as it doesn't cost much and you can deal with the power consumption. However, for even more cores there's dramatically less benefit for an average PC.

    Actually, I think highly clocked/turboed dual cores with HT are the sweet spot for such applications. Sadly, Intel is charging pretty much as much for them as they do for quads... since they know the duals are just as good regarding general usability.

    This is from someone who doesn't fit the average profile and can keep many threads busy most of the time...

  • kyuu - Monday, December 05, 2011 - link

    Spot on.

    Just as the article says, scaling beyond 4-cores is pretty minimal and only really relevant for a limited number of professional applications. Those users already have platforms they can spend up on to get more cores (or just get Bulldozer if IPC isn't important). For mainstream users, there are much better uses of the silicon than adding more cores.
  • alent1234 - Monday, December 05, 2011 - link

    The SQL 2012 pricing came out a few weeks ago, and it's actually cheaper for us to buy brand new quad-core CPU servers to replace the 6-core ones we bought earlier this year. The reason is that SQL 2012 is now licensed per core: $6874 per core, to be exact, for the Enterprise version.

    While the people who only repeat the tech acronyms will be dreaming of more cores, I've already read from other DBAs that they recommend their employers go with fewer cores, especially since there are very few benefits to going multi-core except for a few workloads.

    And for most desktop workloads the software hasn't been optimized for multi-core, won't be, or there is very little extra performance to gain.
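    The math behind the "fewer, faster cores" advice is simple. A sketch using the $6874/core figure quoted above (the two-socket server configurations are hypothetical examples):

```python
# Per-core licensing cost comparison, using the quoted SQL Server 2012
# Enterprise figure. Server configurations below are illustrative.
PER_CORE = 6874  # USD per licensed core

def license_cost(sockets, cores_per_socket):
    return sockets * cores_per_socket * PER_CORE

quad = license_cost(2, 4)  # dual-socket quad-core box
hexa = license_cost(2, 6)  # dual-socket six-core box
print(f"quad: ${quad:,}  hexa: ${hexa:,}  delta: ${hexa - quad:,}")
```

    Four extra licensed cores cost more than many of the servers themselves, which is exactly why DBAs started steering purchases toward fewer, faster cores.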
  • silverblue - Monday, December 05, 2011 - link

    ...the i5-750 was Lynnfield, not Clarkdale.

    In any case, the i5-750 is still a very good CPU and would easily beat the A8-3850 in CPU tasks, even with a clock speed deficit and even without turbo enabled.
  • kyuu - Monday, December 05, 2011 - link

    Sure, but the Llano can deliver more than adequate CPU-power for most tasks along with graphics that can run many mainstream games at halfway decent resolutions/settings in a much lower thermal/power envelope.

    Trinity, if AMD can do what I hope and include some modified/improved Bulldozer "cores" that deliver at least somewhat better CPU power, along with what is sure to be really kickin' graphics performance and excellent power/thermals, will be a home run in ultrabooks and would probably entice me into picking one up.
  • Zink - Monday, December 05, 2011 - link

    What we need to do is clone Anand so that we can get some decent editing done. Maybe Anand can get a lab monkey to move in with him and run benches while Anand does his magic videos and edits articles like this to keep site integrity up. Sometimes I can't tell whose writing it is from the style, but the misunderstandings and mistakes make it obvious with a lot of the other writers.
  • JarredWalton - Monday, December 05, 2011 - link

    Sorry, I put in Clarkdale while going through that section of Kristian's article as an example of how CPU performance needs haven't really improved. You'll be happy to know that it now reads Lynnfield, which of course changes everything because people worry more about codenames than the actual model numbers, right? Seriously, every time someone says "site integrity" because of a very minor mistake, I just laugh at the apparent need for hyperbole on the Internet.
  • Zink - Monday, December 05, 2011 - link

    "PCIe 3.0 should also make 16 lanes fine for dual-GPU setups, reducing the market for SNB-E even more."
    This implies to people just learning about this stuff that 8x/8x on PCIe 2.0 is possibly not fine for dual-GPU. I don't think we know yet if there will be any noticeable performance hit when using next-gen GPUs with PCIe 2.0 8x class bandwidth.
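    For reference, the raw per-lane numbers behind that comparison, back-of-envelope (real GPUs rarely saturate the link): PCIe 2.0 signals at 5 GT/s with 8b/10b encoding, PCIe 3.0 at 8 GT/s with 128b/130b.

```python
# Per-lane PCIe bandwidth from signaling rate and line-code overhead.
def lane_gbps(gt_per_s, payload_bits, total_bits):
    # GT/s * (payload/total) bits per transfer / 8 bits per byte = GB/s
    return gt_per_s * payload_bits / total_bits / 8.0

pcie2 = lane_gbps(5.0, 8, 10)     # 0.5 GB/s per lane
pcie3 = lane_gbps(8.0, 128, 130)  # ~0.985 GB/s per lane

print(f"PCIe 2.0 x8:  {8 * pcie2:.1f} GB/s")
print(f"PCIe 3.0 x16: {16 * pcie3:.1f} GB/s")
```

    So a 3.0 x16 slot has nearly 4x the bandwidth of a 2.0 x8 slot; whether any 2011-era GPU can actually use that headroom is the open question.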

    "These three (well, techically two because Kentsfield consists of two dual-core Conroe dies) chips are the only "real" quad-core CPUs from Intel."
    Leaves out Yorkfield and Lynnfield, Lynnfield being the big one here because it integrated PCIe and allowed the X58 northbridge to be dropped in favour of the P55 PCH.

    "In 2008, Nehalem moved the memory controller onto the CPU die, which allowed Intel to get rid of the Northbridge-Southbridge combination and replace it with their Platform Controller Hub."
    X58 was a northbridge that did only PCIe, there was also the ICH10 southbridge.

    "Sure, IVB is about five months away, but I doubt Intel wants to relive the Sandy Bridge vs. Nehalem (i7-9xx) situation--even Bloomfield vs. Lynnfield was quite bad."
    Call it Sandy Bridge vs. Gulftown to make it simpler. I don't know why Intel dropped the individual CPU names and decided to only use the architecture name. Sandy Bridge is LGA1155 while Sandy Bridge-E is LGA2011. It also means that when they die-shrink to Ivy Bridge, we can't keep calling it Sandy Bridge like we did with Nehalem.
  • Zink - Monday, December 05, 2011 - link

    "Sandy Bridge vs. Nehalem (i7-9xx)"
    I see what you're saying, the i7-950 is technically "higher end". Sandy Bridge was such an obvious upgrade over Bloomfield and the prices were the same so I didn't consider that. I was thinking Sandy Bridge VS Gulftown because that was a situation where the much more expensive processor often provided a poorer user experience.

    I agree, there is no way Intel could release a six-core LGA1155 CPU now, mainly because of the marketing. The SoC and six-core scaling are also good arguments when combined, because Intel saves money by not having to manufacture a six-core die along with their four-core and two-core products. If competition with SNB-E weren't an issue it would be easy to make a few six-core LGA1155 CPUs, but the extra cost of having more products isn't necessary, especially with no competition from AMD.
  • Kristian Vättö - Monday, December 05, 2011 - link

    I'll give you the Lynnfield part (added now). I didn't remember that Nehalem still needed two extra chips, and that it wasn't until Lynnfield that we got on-die PCIe. Yorkfield still consisted of two dies, so it's basically the same as Kentsfield but at 45nm instead. Technically a different die, but in the end the same.
  • Zink - Monday, December 05, 2011 - link

    " Even then, most consumers would opt for the IVB platform due to cheaper motherboard costs and lower TDP."

    If this doesn't show AMD failing to compete, I don't know what does: Intel deciding not to make a product that is exactly what customers want.
  • Roland00Address - Monday, December 05, 2011 - link

    Ivy Bridge is going to be Intel's first mass production of 22nm chips. When you are doing a new manufacturing process, you do not mess with the design and make it more complicated than you have to; if you screw it up you will have crappy yields, and crappy yields mean lost money. Graphics card and SoC manufacturers call this a "pipe cleaner"; Intel calls it a "tick."

    Once you know your manufacturing technique, once you are sure of it and yields are great, then you get all creative and do your planned monster (what Intel calls a "tock").

    Intel not needing to release a 6-core (most mainstream users would not benefit due to software, and AMD isn't providing enough competition, so a mainstream 6-core would just cannibalize their high end and reduce profit) is just gravy.
  • LV3 - Monday, December 05, 2011 - link

    Although Intel is still making great improvements, I can't help but think it's not as good as it could be. The main reason, imo, is that Intel execs are holding the leash back on their engineers because otherwise they would be competing with their own processors.

    Here, Intel is competing with itself. If AMD was at all a threat, Intel would have no qualms about making Ivy Bridge even better.
  • IntelUser2000 - Monday, December 05, 2011 - link

    This is actually mostly, if not ALL, due to ARM. In 2008, Haswell was slated to bring 8 cores to mainstream computing. Now it's 4. This year they announced a significant strategy change to lower power across the entire product line. That's what's happening now. Haswell will still get all the per-clock performance and other improvements, but not the extra core count (maybe at the high end, but not for the vast, vast majority).
  • Ananke - Monday, December 05, 2011 - link

    Intel just doesn't need to offer a 6-core IB, because it has no competition. Instead, Intel will have double the profit margins due to the smaller die, and will sell at the current or slightly higher prices than the SB analogs. The customer is happy - he gets a 10-20% performance increase; Intel is happy - it gets a 50-60% margin increase :):):).
  • plague911 - Monday, December 05, 2011 - link

    I want a CPU at roughly $200-300.

    I want a discrete graphics card in the $200-300 range,
    and I want PCIe 3.0 and USB 3.0.

    I can get an Ivy or Sandy Bridge chip with integrated graphics, which I find a waste of money.
    Or I can splurge on a $500 Sandy Bridge-E and get a 6-core, which wastes money because many programs do not support that many threads, so the splurge is not worth it.

    So yeah, Intel is forcing me into a buying situation where I am not being offered what I want.

    Yes, I am being selfish, but I assume others are in the same position as me.
    This is what happens when AMD becomes a failure on the consumer front :(
  • kyp275 - Monday, December 05, 2011 - link

    Wonder if it'll be worthwhile to upgrade to Ivy Bridge from Bloomfield for gaming purposes... might be worth it for the reduced TDP and more OC overhead, even if there's no huge performance jump?
  • Shadowmaster625 - Monday, December 05, 2011 - link

    That's the technical term for it: milking. Intel is doing it because they can. That's clearly what they're doing with the new Atoms. They won't release the new Atom because they know it will castrate their CULV chip sales. They know that 99% of notebook users' needs are met by the Pentium B950. If an Atom even comes close to matching that in CPU and GPU performance, their YoY revenues will go down. Can't let that happen. Of course it's gonna happen anyway, but they can extend and pretend for quite a while.
  • drbaltazar - Tuesday, December 06, 2011 - link

    Intel releases a new idea, say in 2010, and then at the next evolution they polish that idea insanely well.
    AMD works differently: they don't die-shrink as often as Intel, but they polish their designs more often, which is why AMD sometimes looks like it's behind. (Remember when 45nm was released and they later polished it into the X6, which was surprisingly good?)
    The only reason Intel looks ahead is that AMD is betting on a lot of future stuff: FMA4, XOP, the way the cache works. None of it is supported by Windows 7, and neither is a lot of the other stuff AMD built into FX; even when Windows 8 is released, much of it still won't be supported, let alone mainstream. We'll have ways to test FMA4, but will the average program optimize for it? Unless MS optimizes their own programs for the various toys AMD has made available, I don't see how that happens. As for Intel, FMA3 is a long way off, and FMA4 nobody supports.
    And at 1080p on a 23-inch display, a bottleneck occurs whether it's Intel or AMD: it's impossible for both processors to be within the margin of error of each other unless something else is limiting (probably on the motherboard side; board makers tend to be slow to adopt new things). I've been using computers since the 80s, and results have never landed within the margin of error like this. Lots of people are looking into it on both sides of the fence (probably why FX sells so many darn processors, often out of stock). One thing is sure: this has to be figured out before Windows 8 ships. Another thing is sure: a lot of people just did smaller upgrades and will skip this generation (FX), especially with the X6 at $140 and the 980 at $90 on AMD's side. Most will do a serious upgrade once they know why Intel and AMD land within the margin of error in so many tests!
  • beginner99 - Tuesday, December 06, 2011 - link

    ...why some people think this is an Intel-biased site. Of course the mentioned points make sense, but why is Ivy Bridge threatening SNB-E? Because SNB-E is a crippled enthusiast platform. Only 2 SATA 6Gb/s ports? Seriously? And the total count of 6 SATA ports is rather low too. I have a 1156 (Lynnfield) system and use all 6 ports. Just a default SSD + HDD + optical drive already uses half of them.
  • JarredWalton - Tuesday, December 06, 2011 - link

    Okay, I'll bite: what does any of that have to do with us being "Intel biased"? We list marketing (market segmentation as well) as a reason why IVB is quad-core and that makes us Intel biased? Hmmm....
  • name99 - Tuesday, December 06, 2011 - link

    " Thus, instead of hex-core, we get a chip that looks much the same as a year-old Sandy Bridge, only with improved efficiency and some other moderate tweaks to the design. Let's go through some of the elements that influence the design of a new processor, and when we're done we will have hopefully clarified why Ivy Bridge remains a quad-core solution."

    The answer to this question is trivial. More cores solve a problem that almost no one has, and a few enthusiasts screaming that their usage models could easily use 10 or 20 or 100 cores is not going to change that. It was fairly easy for dual-core CPUs to provide real value to most users: with modern OSes, there's enough background work of one sort or another that the second core frequently pays off. Quad core is a much harder sell.

    In spite of Intel's work, Apple's GCD work, etc., highly threaded (or even slightly threaded) "core" apps remain rare. The main browsers make little use of more than two cores, and the only reason they give two cores a workout is that the OS shunts graphics, OS work, and some UI (all low CPU load) onto a second core. Launching apps is still too slow (anything where I have to wait is "too slow"), but, as far as I know, dyld fixups are single-threaded. iTunes (which, yes, I know, is the crappiest "major app" in existence) resolutely uses only one core, even though parts of it appear to be multi-threaded.

    Given this situation, dual core with hyperthreading is good enough for almost every user. And it's going to remain that way until the major browsers become more threaded, iTunes becomes more threaded, Intel's supposed "run a separate thread to pre-warm caches" technology becomes real, etc.
    This is a fact. I run Mathematica, so I'd consider myself a power user, and I'll be getting a hyperthreaded quad-core Ivy Bridge iMac next year, but I fully expect that 99% of the time it will have two or fewer threads active; even Mathematica will only exercise eight threads for rare operations.

    So it makes NO sense for Intel to spend effort on CPUs with more cores. Far more sensible is to concentrate on

    - single threaded performance (tough, and you can't fault them for the work they have done so far)

    - power usage (again tough, and again they've been doing a really good job, though one suspects there is room for a big.LITTLE strategy on their CPUs: essentially the equivalent of turbo-ing down)

    - special purpose hardware that can solve real problems. This is harder in their world than in the phone/tablet world because on phones/tablets one expects that only a single app will use this special hardware, that background apps will shut down, etc --- whatever is most convenient to get the feature to work. We have some of this with AES and QuickSync.
    Even so --- if they had a low-power CPU on board that could run the background OS stuff, plus dedicated HW to play movies and music, would that help use cases like light web browsing or email while listening to music, or full screen movie viewing?

    - I suspect there is scope for Intel to conserve power outside the CPU, in RAM. One could imagine the memory controller (more or less in conjunction with the OS) deciding that only one of two, or 1-3 of 4, memory DIMMs really needs to be powered up given the current active working set, and shutting the others down.

    The basic issue is --- expending transistors on what people can't use is foolish. People want
    - single threaded performance
    - low power
    Spend transistors on THOSE.
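    name99's browser example above, a single-threaded app whose second core is kept busy only by OS-style background work, can be sketched with Python's standard library (the task names and workloads are purely illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def app_work():
    # The "app" itself: one foreground thread of real work.
    return sum(i * i for i in range(10_000))

def background_work():
    # OS-style housekeeping (graphics, UI, I/O) that the scheduler
    # can shunt onto a second core.
    return sum(range(10_000))

# Only two threads are ever busy here; a third or fourth core
# would sit idle no matter how many the CPU offers.
with ThreadPoolExecutor(max_workers=2) as pool:
    fg = pool.submit(app_work)
    bg = pool.submit(background_work)
    print(fg.result(), bg.result())
```

    Until the foreground app itself splits its work, adding cores only adds idle silicon, which is exactly the point above.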
  • hardrock_ram - Tuesday, December 06, 2011 - link

    This of course makes perfect sense. Besides, if Intel adds more cores to their desktop CPUs, they have to add even more to their Xeon lineup. Many people, including myself, build workstations with Xeons and Opterons partly to get cores. We pay a premium that Intel would otherwise lose. Xeons obviously have advantages beyond core count, but people like me, who use them as generic network nodes for 3D rendering (not critical), might be inclined to buy the desktop counterparts if they offered the same amount of "power".

    Just my 100 dollars ...
  • bigboxes - Tuesday, December 06, 2011 - link

    I've been waiting until Ivy Bridge to upgrade my Nehalem processor. I really wanted to go to a hex-core, but wanted to do so with Ivy Bridge. Man, I wish AMD was as competitive with Intel as they are with NVIDIA.
  • charleski - Tuesday, December 06, 2011 - link

    Everything I see about IVB makes me think this design is really targeted at laptops. The lower TDP and improved IGP make it very desirable for power-constrained situations, but don't really offer anything for the desktop.

    The only non-mobile systems that would benefit at all would be HTPCs.

    Laptops are, of course, a massive market that dwarfs the enthusiast power/speed-hungry segment, so I can't blame Intel for this, especially when SNB will be trouncing AMD for performance for the foreseeable future. But I think it's clear that if you're building a desktop there's very little point waiting around for IVB since Intel isn't planning any major jumps in performance for this sector over the next 18 months.
  • iollmann - Wednesday, January 04, 2012 - link

    Desktops are dead, man! They just don't know it yet. Desktops are being disrupted by portables, which are in turn being disrupted by post-PC devices. Once the economies of scale drop off, it's goodbye, big iron.

    It's probably for the best. Too much power is being wasted in our global obsession with computing. There is no reason to have a 500W space heater under your desk when a new 35W machine is just as good.
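    The 500 W vs. 35 W comparison above translates into real money. A back-of-the-envelope sketch (the 8 hours/day of use and $0.12/kWh electricity rate are illustrative assumptions):

```python
# Annual energy saved by replacing a 500 W desktop with a 35 W machine,
# assuming 8 hours/day of use at $0.12/kWh (both figures illustrative).
watts_saved = 500 - 35                        # 465 W
kwh_per_year = watts_saved * 8 * 365 / 1000   # 1357.8 kWh
cost_per_year = kwh_per_year * 0.12
print(f"{kwh_per_year} kWh/yr, ${cost_per_year:.2f}/yr")
```

    Roughly $160 a year per machine under those assumptions, before even counting cooling.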
  • dealcorn - Wednesday, December 07, 2011 - link

    Prior to convicting Intel of some sort of core deficiency at 22nm, it may be helpful to see what MIC brings to the table. Is MIC helpful in video transcoding workloads, for example?
  • Dribble - Wednesday, December 07, 2011 - link

    The IB quads look barely faster than the SB ones. The one reason to produce a hex-core would be to give us SB owners a good reason to upgrade, and hence spend money on Intel products.
  • Death666Angel - Wednesday, December 07, 2011 - link

    Any idea how useful IVB's IGP will be for desktop gamers? Will there finally be a reliable technology that lets us disable the dGPU in normal workloads, or use the IGP for Quick Sync...?
  • Wolfpup - Thursday, January 05, 2012 - link

    Ugh. That's enough die area for at least another core and some cache. At best it sits idle. At worst you're stuck with "switchable" graphics nonsense.

    It does give AMD a better shot: if Intel's wasting that much die area, AMD can sell something with the same useful number of transistors for less, or pocket more money, or put more transistors toward useful work, etc.

    I mean, geez... 16 PCIe lanes? All those transistors wasted on Intel video? I hate how they're trying to shove it down our throats.

    Heck, my G74 notebook counts towards Intel's video sales figures, I suppose...
