
  • gregounech - Saturday, June 01, 2013 - link

    Finally, let's see how good Haswell is.
  • Krysto - Saturday, June 01, 2013 - link

    Disappointing, to say the least. He's even comparing it to chips 2-3 generations older, just to be able to write some non-embarrassing numbers in the review, considering Haswell is only about 5-10% faster than IVB.

    The only big improvement seems to be in idle power consumption, of about 30%, which seems to "impress" Anand, but it just means that if your laptop had a 12h idle time, now it gets 16h.

    It won't do much in ACTIVE power, which is really what matters. So much for all the "Haswell will totally dominate tablets in the near future" hype from Anand. Yes, this is not the mobile version, but if Haswell really were an impressive design for power consumption, you'd see it here, too. It actually consumes 5-10% more than a same-clock-speed IVB chip, which means its extra performance is almost completely negated.

    This means that my prediction that Intel will try to "trick" us into thinking Haswell is ready for tablets will soon come true. Because if Haswell is not efficient enough to warrant being used in "normal" tablets, then they'll have to dramatically lower clock speed and performance just to achieve a 10W TDP (still too high for a tablet).
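    The idle-power claim above (30% less idle draw turning 12 h into 16 h) is easy to sanity-check: at a fixed battery capacity, runtime scales inversely with draw. The capacity and wattage below are hypothetical, chosen only to give a 12 h baseline.

```python
# Hypothetical numbers: a 48 Wh battery at 4 W idle draw -> 12 h baseline.
def idle_runtime_hours(capacity_wh, idle_draw_w):
    return capacity_wh / idle_draw_w

old_hours = idle_runtime_hours(capacity_wh=48, idle_draw_w=4.0)        # 12.0 h
new_hours = idle_runtime_hours(capacity_wh=48, idle_draw_w=4.0 * 0.7)  # ~17.1 h
```

    So a strict 30% cut gives closer to 17 h than 16 h; the 16 h figure is a conservative round-down.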
  • gregounech - Saturday, June 01, 2013 - link

    Pretty good points.

    I'll still upgrade from my i5 750. We won't get anything interesting until Skylake on the desktop (apparently, it won't be BGA), as I'm expecting motherboard OEMs to force us into buying their high-end motherboards with any of the high-end Broadwell i7s.
  • Samus - Sunday, June 02, 2013 - link

    I'm using an i7-950 (over 4 years old) and it's funny seeing how still-competitive it is with Intel's newest chips. It seems Sandy Bridge brought the bang and it's just been trickle-down performance since...
  • Deelron - Sunday, June 02, 2013 - link

    No kidding, I'm in the same boat and was thinking of upgrading, but with my moderate overclock and these results I have no problem waiting until the next generation.
  • klmccaughey - Monday, June 03, 2013 - link

    I'm still running my i5-2500k @ 4.3GHz and see nothing here of interest. I don't see myself upgrading any time soon.
  • Hrel - Monday, June 03, 2013 - link

    I'm still running an E8400. I see plenty of reason to upgrade.
  • dananski - Monday, June 03, 2013 - link

    Ahh I had one of those until this time last year. The E8400 found a happy home with a friend and I went to Ivy Bridge. Even small improvements like Sandy -> Ivy -> Haswell are useful, so don't feel too bad for having waited so long.
  • vol7ron - Tuesday, June 04, 2013 - link

    haha. I'm still running an OC'd E6600 (amongst others) for my desktop, which I hardly touch anymore, and am unsure whether to upgrade. I'm curious to see how Haswell's performance vs. power consumption vs. price works out for NAS systems.

    Aside from natural degradation in one of the components (CPU, memory, PSU, or GPU), which brings on an error every once in a while, the E6600 still does 98% of what I want of it and 100% of what I need it to do. So I suppose my needs are to get my electric bill down.

    I need to read more because I was hoping this chip would bring me to buy a Surface Pro.
  • slickr - Wednesday, June 05, 2013 - link

    You will get benefits upgrading from the E6600, no doubt about it. For you it's worth it: you'll get much lower power consumption and a cooler, quieter chip, along with a significant performance increase, especially in multithreaded applications. But for those thinking of upgrading from Sandy or Ivy Bridge to Haswell, it's worthless.

    I mean, upgrading from a 2500K to the new 4770K is useless. At best you are going to see a 40% performance improvement, which translates into 5 seconds faster decoding or 5 seconds faster unzipping, but at worst you are going to get a 2% performance increase, which amounts to milliseconds of faster decoding and such.

    You are not going to get lower power consumption, and it seems you may even get worse power consumption under load. The new chips don't overclock as well either, so it's a waste of money to upgrade. If your chip is 3-4 generations old it's worth upgrading, but otherwise your money is better spent elsewhere.
  • IUU - Thursday, June 06, 2013 - link

    These are some very good processors. What is misplaced, though, is the graphics part. It is OK for mobile processors to have such a part, but for high-energy-consuming desktop processors it is irrelevant and more of a burden. In the desktop world you expect a CPU with the highest possible processing power. It is 2013 and quad core is getting old; come on Intel, don't be shy, bring 8- and 10-core processors, you have the room to accommodate them. If I want graphics I'll buy a monster graphics chip and will not be bothered by "optimal" power consumption.
    It is sad to see the energy efficiency and the computing efficiency improving only to accommodate a miserable graphics card. If you want us to buy graphics together with the CPU (an APU), make a chip with a TDP of 300-400W, because that is a mainstream energy budget for the desktop.
    If that is impractical (which is probably the case), build your own discrete graphics.
    Don't vandalize your CPUs this way. If you want the mobile market, go for it, just don't contaminate your desktop products.
  • desky - Saturday, June 08, 2013 - link

    same with me!
    I've been running an E8400 @ 4.5GHz since 2008, and now seems a good time to upgrade.
    I hope the Haswell chip will overclock just as well. From what I've read, that means I may have to exchange the thermal paste under the IHS for some Coollaboratory stuff...
    Not decided on i5 or i7 yet though...
  • cmdrdredd - Monday, June 03, 2013 - link

    You shouldn't have expected a reason to upgrade yet. Sheesh...we knew it would not be a real upgrade over an SB or IB CPU.
  • peckiro - Monday, June 03, 2013 - link

    I built a new-from-the-case-up Z77 rig with a 3570K running 24/7 @ 4.4GHz last fall. I seriously considered waiting for Haswell to hit the market before upgrading my machine (circa 2007). I'm glad I didn't wait any longer, since I have been running a damned fast machine for 8 months now. I think I may have felt a little underwhelmed if I had waited for Haswell. Just my 2 cents' worth.
  • cmdrdredd - Monday, June 03, 2013 - link

    Yep...I built up my current 3570k system around the summer of last year.
  • kaiserreich - Saturday, June 01, 2013 - link

    While you are right that load power consumption is higher, the system is not loaded all the time.
    Not for common users, anyway. In that case, idle power will serve as a better power-performance indicator than load consumption.
  • tential - Saturday, June 01, 2013 - link

    Pulling up this website and browsing, my CPU utilization is under 20%, and under 5-10% the majority of the time. And that's on a Core2Duo from 5 years back. I'd imagine that if I were browsing this page on a Haswell it'd be even lower, considering those processors are much faster as well. So on the mobile side I'm guessing average battery life increased quite a bit. We'll see with the review. Using LOAD power consumption to judge mobile, though, is just plain ignorant.

    This is why some people need to stick to reading reviews, and wait for full reviews to come out, rather than jumping to early conclusions and misinterpreting information.
  • t.s - Sunday, June 02, 2013 - link

    So, what about people who want to game on a laptop?
  • takeship - Sunday, June 02, 2013 - link

    Did you realize that these are all desktop chips? Let's not judge mobile using desktop parts. But seriously, what percentage of users game with a mobile quad core & Intel graphics?
  • krumme - Saturday, June 01, 2013 - link

    5-10% is on the positive side imho.
    Sometimes it can even be slower :)
    This processor is not intended for the desktop. There is nothing wrong with that. But why all this BS covering it up? It's a fine processor and a weird, shallow review.
  • JDG1980 - Saturday, June 01, 2013 - link

    Those tests you linked are purely synthetic benchmarks. Has anyone found a significant real-world application where Haswell is slower than SB/IB at the same clock speed?
  • krumme - Saturday, June 01, 2013 - link

    Has anyone found a significant real-world application where Haswell matters?
  • smilingcrow - Saturday, June 01, 2013 - link

    Has anyone found a significant real-world comment from you that matters?
  • rupert3k - Saturday, June 01, 2013 - link

    *high fives*
  • Aditya211935 - Sunday, June 02, 2013 - link
    This one is pretty nice.
    Although the list of games given there is a bit shallow, it still gives you a general idea.
  • jeffkibuule - Saturday, June 01, 2013 - link

    Haswell isn't just about the CPU, it's about the entire platform of chips, even the tiny ones on the motherboard no one really cares about. And I'm not sure what you were expecting in terms of performance gains, as there isn't any competition in the desktop arena for it to be worth pushing out 20-25% gains (and certainly not on a yearly basis).

    As far as active power goes, the entire point of a modern CPU architecture is to "hurry up and go to sleep" (HUGS). The faster you go to idle and the better idle power you have, the better battery life you get, because let's face it: unless you're encoding a video or playing a game, your CPU spends most of its time idling, doing nothing.
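    The race-to-idle trade-off is easy to put rough numbers on. All wattages and speeds below are made-up illustrations (not measured Haswell figures): the faster chip burns more power while active, but finishes sooner and coasts in a cheaper idle state, so total energy over the window still drops.

```python
# Energy used over a fixed time window for a fixed amount of work.
def energy_joules(active_w, idle_w, work_units, units_per_sec, window_s):
    active_s = work_units / units_per_sec   # time spent busy
    idle_s = window_s - active_s            # rest of the window spent idle
    return active_w * active_s + idle_w * idle_s

# Same 100-unit workload over a 10-second window:
slow_chip = energy_joules(active_w=10.0, idle_w=2.0,
                          work_units=100, units_per_sec=25, window_s=10)
fast_chip = energy_joules(active_w=14.0, idle_w=0.5,
                          work_units=100, units_per_sec=50, window_s=10)
# slow: 10 W * 4 s + 2.0 W * 6 s = 52 J
# fast: 14 W * 2 s + 0.5 W * 8 s = 32 J
```

    The "fast" chip draws 40% more while active yet uses less total energy, which is the whole HUGS argument.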
  • B3an - Saturday, June 01, 2013 - link

    You're right. If anyone is to be blamed for the slight performance increase of Haswell, it's AMD. If AMD could actually compete on performance we would be seeing more gains with Haswell, and maybe FINALLY more than 4 cores on mainstream Intel platforms.
  • DeadlyRicochet - Tuesday, June 04, 2013 - link

    And I wonder how successful AMD would have been if Intel hadn't stolen their revenue (when AMD chips were faster than Intel's, around 2006) by making under-the-table deals with OEMs to use Intel-only CPUs.
  • deepblue08 - Saturday, June 01, 2013 - link

    I disagree somewhat, especially with the idea that idle power consumption is unimportant. Keep in mind that at least half of the time when you are working or browsing the net, your CPU is in fact idle. So an improvement in idle power consumption is a big deal imo.
  • Hector2 - Saturday, June 01, 2013 - link

    You've got it all wrong. Haswell won't be going up against ARM in tablets. Don't look for it in smartphones either! LOL. A Haswell i7 is serious overkill for those markets. ARM won't be getting into the serious laptop & PC market soon either --- they just can't compete well with Intel there. Intel is targeting its smaller & less power-hungry Atoms at the phone & tablet markets (surely you know this?), which are driven by low power requirements. The press just leaked that Samsung's new Galaxy Tab 3 will have dual-core Clover Trail+ Atoms.
  • Da W - Saturday, June 01, 2013 - link

    It confirms Temash tablets will be the GPU+CPU performance / power / price ratio to beat.
  • takeship - Sunday, June 02, 2013 - link

    At 8-15W. What size market is that again? It's like saying Amtrak has a better cost/distance than a Prius. Yes, but so what?
  • Dal Makhani - Saturday, June 01, 2013 - link

    It's not disappointing at all; they're gains, and any gains matter on an annual schedule. As long as it beats Ivy by any percentage, it's progress. You know Intel's goals are less about IPC than about mobile, so don't rant when all the facts are in front of you.
  • peterfares - Saturday, June 01, 2013 - link

    You must have missed the part where S0ix isn't available on the desktop parts. How about you wait until the MOBILE and ULV processor tests are in before you start ranting.
  • Jammrock - Saturday, June 01, 2013 - link

    The point of Haswell is not to drastically improve performance. Haswell is designed to move x86 into the tablet and mobile market with drastically improved idle and low power performance. Skylake, in roughly 2015, will likely be the next big performance boost.
  • Hector2 - Sunday, June 02, 2013 - link

    "Haswell" isn't going into tablets
  • Klimax - Sunday, June 02, 2013 - link

    It does - Surface Pro class (10W TDP).
  • thebeastie - Sunday, June 02, 2013 - link

    Well I am happy to see the 4th gen release. And yay PCI is now officially gone. Shouldn't there be a memorial ceremony? And maybe a trophy? :)

    Kudos to the first posters; it looks like they somewhat actually read the review, even though I don't know if I agree with their comments.
  • klmccaughey - Monday, June 03, 2013 - link

    Yes, it is great to see PCI finally dead and buried. It's been a bit like having a tow bar on a Ferrari these last few years. Hoorah for the death of PCI!!! :)
  • GullLars - Sunday, June 02, 2013 - link

    Where this will probably shine is in mixed workloads, not overclocking for gaming or production.
    It will be easier to put Haswell into (G)HTPC builds at mini-ITX and µATX form factors and keep noise down while still having great burst performance. The 4770K seems not worth it for overclockers who already have good Sandy/Ivy chips.

    I think I may upgrade my parents' living room PC to something like a mini-ITX build with an i3-42xxT, and just transfer the SSD (Force GT 120GB) and RAM (8GB 1600 SO-DIMM). It should be a substantial upgrade from the E-350 and almost fit the same power envelope for their use cases.

    I'm looking forward to more info on Ivy-E. I'm happy with my 3930K with a decent OC, but if Ivy-E can improve the power/performance ratio without bringing performance down or adding heat issues, I might upgrade :)
  • Iketh - Sunday, June 02, 2013 - link

    Your post is so ignorant that you should have posting privileges revoked.
  • gryer7421 - Monday, June 03, 2013 - link

    ? Everyone knew this was Intel's Piledriver revision.
  • Donkey2008 - Monday, June 03, 2013 - link

    When you are "cynical" you will "see" nothing "good" in anything.
  • jonjonjonj - Tuesday, June 04, 2013 - link

    I agree. It's annoying that Intel is designing its desktop CPUs to also compete with ARM in mobile. Why can't Intel develop two different versions or architectures? Is power efficiency the limiting factor, and if TDP and power were no concern, how much better could Intel do?

    I personally don't care about power on my desktop as long as the performance justifies it. Desktop CPUs should be about performance, not saving power.
  • Death666Angel - Tuesday, June 04, 2013 - link

    How can you be disappointed when Haswell gives you exactly what was promised? It seems like you should have adjusted your expectations. Or you just like to be disappointed.

    I'm not surprised by these numbers, they are what was expected. I'm still running an i7-860 @3.8GHz and when I get enough money I'll upgrade to an i7-4770k and hopefully be able to run it at 4.5GHz, give or take some (water cooling setup here). Maybe IVB-E if it tests well and money is not too tight.

    Anand really needs a Lynnfield for comparisons, because the i7-9xx was geared towards people running the enthusiast platform, whereas all the other CPUs tested here are geared towards mainstream high end.
  • ninjaquick - Wednesday, June 05, 2013 - link

    Haswell at 1.8 GHz is a completely different story to any of this... Mark my words: You will never see 3.4 GHz parts in tablets, ever.

    However, a 30% improvement in efficiency can be roughly translated to a 30% increase in performance per watt. That is massive in tablets.

    Sure, it won't be a 3.4 GHz tablet part, but it will also be a bit quicker than a 1.6 GHz tablet part.
  • mkygod - Wednesday, September 18, 2013 - link

    Of course they are comparing to older CPUs, because the article made a point of saying that it does not make a whole lot of sense to upgrade from Ivy Bridge. But still, 5-10% faster than Ivy Bridge is pretty good, I would say, for the extra ~$10-20 difference.
  • boe - Monday, June 03, 2013 - link

    It's just as good as Ivy Bridge - pretty much Ivy Bridge pretending to be something new.
  • Dnann - Friday, June 14, 2013 - link

    It seems it's not as good as I was imagining. :O
  • rudolphna - Saturday, June 01, 2013 - link

    Honestly, the best part of the review was the comparisons to older chips. It's entertaining to see just how terrible the Pentium 4 was in hindsight.
  • jeffkibuule - Saturday, June 01, 2013 - link

    I wouldn't say that Pentium 4 was terrible, but their 2004-2006 exercise of continually pumping up clocks was misguided.
  • Nfarce - Saturday, June 01, 2013 - link

    Exactly. As someone who still has my P4 Northwood 3.06GHz (with HT) as a general-use PC, I loved it. It served as my main gaming and photo/video editing PC back in the day, and was only replaced with a C2D E8400 overclock build four and a half years ago (which was replaced two years ago with an SB 2500k build). Anyone who says the P4 was terrible is either an AMD fanboy trolling or never had one at the time.
  • bji - Saturday, June 01, 2013 - link

    By any reasonable metric, P4s were pretty bad. Glad you like yours but that's mostly because even back in the P4 days CPUs were already "fast enough" most of the time for most tasks and you probably would have liked a Pentium M or Athlon just as well. P4s started out with very weak performance and were improved a decent amount during the lifetime of the architecture, but they were never spectacular performers vs. the competition and they were always extremely hot and power hungry. Also Rambus memory was a joke.

    More on topic, I'm not surprised that Haswell isn't significantly faster than Ivy Bridge. I said when Sandy Bridge came out that the x86 architecture would never get 50% faster per core than Sandy Bridge. With the combination of nearing the end of the road for process shrinking, the architecture itself already having been optimized to such a degree that any additional significant gains come at an extremely high transistor and R&D cost, the declining of importance of the x86 market as mobile devices become more prominent, and the "already much more than fast enough" aspect of modern CPUs for the vast majority of what they're used for, it's pretty clear that we'll never see significant increases in x86 speed again. There just isn't enough money available in the market to fund the extremely high costs necessary to significantly increase speed in a market where fast enough was achieved years ago.

    I'll stand by my statement of ~2 years ago: x86 will top out at 50% faster than Sandy Bridge per core.
  • nunomoreira10 - Saturday, June 01, 2013 - link

    Maybe not on the common instruction set, which Intel has already addressed on Haswell. Just wait for the software to update to AVX2 and you will see how slow Sandy Bridge is by comparison.
  • klmccaughey - Monday, June 03, 2013 - link

    @bji: Totally agree. These are the halcyon days, and I can't see the likes of the 4770k getting significantly more powerful any time soon. I believe it will take a huge technology breakthrough in terms of fab materials, along the lines of optical or biological chips. At least 10 years away.

    The corollary to this is that we don't actually really need any more power. We already have the level of "good enough" for the GPU (in gaming terms). In terms of compute power, that is definitely continuing in the concurrency paradigm - which is where it should be, it makes sense. Programmers (like myself) are proceeding along these lines to get more power.

    I think we are at either a pivotal point or a point of divergence again in computer technology. It's very exciting and interesting for me :)
  • jmelgaard - Sunday, June 02, 2013 - link

    Wait what... I must be an AMD fanboy then (although I love Intel and never owned an AMD >.<, lol)...

    Honestly, the P4 platform was terrible in many aspects, and yes I did own one, several actually (2.266, 2.4, 2.8)... But having a dual Pentium III 1GHz at the time as well made it pretty obvious to me how bad the P4 really was... Granted, all those P4s were at lower clocks than yours...

    But nothing is so bad that it isn't good for something; after all, Intel's generations after the P4 have all been pretty amazing...

    More on the topic though, I am a bit dismayed and disappointed that the power consumption goes up compared to the last generation under load... Great that the idle power goes down that much, but I would rather see the exact same performance as 3rd gen and a huge power reduction... After all, performance-wise I am still more than satisfied with my i970... I don't feel like I need more juice, so I would rather save some bucks on the electrical bill... Obviously there will be different minds about that part... Just saying what I feel...
  • Donkey2008 - Monday, June 03, 2013 - link

    Weird how you keep saying how "bad" it was in its time, yet you present no actual facts to back that up. About the only bad thing I ever saw with the P4 was high temps, which any decent HSF fixed.
  • bji - Monday, June 03, 2013 - link

    It was so bad that Intel had to pay vendors not to buy the competitor's chips, an action that they were later sued for and settled to the tune of $1.25 billion.

    The P4 started out very badly; it was very power hungry and had weak performance compared to the competition. Intel was also the only company able to make chip sets for it (can't remember if there were technical or legal reasons behind this or both), and they refused to support any memory but Rambus (for a long time), further hurting their cause by propping up a company that is pretty much the dregs of submarine patent lawsuit filth.

    I can't think of any way in which the P4 was better than its competition of the day except that it had Intel's sleazy business practices behind it, if you consider that "better". It certainly played better in the marketplace, ethics notwithstanding.

    You may have been happy with your P4 because it did what you needed it to do. Awesome. Nobody is saying that the P4 didn't work or that it couldn't actually fulfill the duties of a CPU, we're just saying that compared to its contemporaries, it kinda blew chunks.
  • superjim - Wednesday, June 05, 2013 - link

    I had two P4 chips (2.4 Northwood and 3.0 Prescott) along with many Athlon XP systems (Palomino, Thoroughbred, and Barton), and the Athlons beat the P4s in nearly every metric. Then came the Athlon 64 to solidify AMD's crown. It wasn't until the Core 2 (Conroe) chips that Intel came screaming back, and they have held the crown since.
  • Donkey2008 - Monday, June 03, 2013 - link

    "Anyone who says the P4 was terrible is either an AMD fanboy trolling or never had one at the time. "


    My Northwood 3GHz was as fast, stable, and solid as any CPU I have ever owned. It performed slightly slower than an equivalent A64, but nothing noticeable to the human eye. Maybe these people who bag on it have bionic eyes.
  • bji - Monday, June 03, 2013 - link

    +10 false dichotomy. Look it up.
  • kenjiwing - Saturday, June 01, 2013 - link

    Any reviews comparing this gen to a 980x??
  • Ryan Smith - Saturday, June 01, 2013 - link

    It's available in Bench.
  • owikh84 - Saturday, June 01, 2013 - link

    4560K??? Not 4770K & 4670K?
  • karasaj - Saturday, June 01, 2013 - link

    4670K is the Haswell equivalent of a 3570K.
  • hellcats - Saturday, June 01, 2013 - link

    I read with some concern that the TSX instructions aren't going to be available on all SKUs. This is the main thing that I've been looking forward to on Haswell! Not providing the capability across the family is reminiscent of the 486SX/DX debacle. TSX could be huge for game physics as it would allow for far more consistent scaling. I know it is supposed to be backwards compatible, but what's the point of coding to it if it isn't always there?
  • zanon - Saturday, June 01, 2013 - link

    Agreed, TSX is one of the most interesting parts of Haswell, so I'm sorry not to see it get more discussion. And as you say (and like with VT-d or other tech), I think Intel is being stupid and self-defeating by trying to make it an artificial differentiator. Unlike general basics of a chip such as clock rate, cache, hyperthreading, or raw execution resources, these sorts of features are only as valuable as the software that's coded for them, and nothing kills adoption amongst developers like "well, maybe it'll be there but maybe not." If they can't depend on it, then it's not worth spending much extra time with, and that tremendously limits what it can be used for. That principle shows up over and over; it's why consoles can typically hold their own for so long. Even though on paper they get creamed, in reality developers are actually able to aim for 100% usage of all resources because there will never be any question about what is available.

    For features like this Intel should aim for as broad adoption as possible, or what's the point? They can differentiate just fine with pure performance, power, and physical properties. Disappointing as always.
  • penguin42 - Saturday, June 01, 2013 - link

    Agreed! I'd also be interested in seeing performance comparisons with a transactionally optimised piece of code.
  • Johnmcl7 - Saturday, June 01, 2013 - link

    Definitely, I was a bit puzzled reading the review to find barely a mention of TSX when I thought it was meant to be one of the ground breaking new features on Haswell. Even if there was only a synthetic benchmark for now it would be extremely interesting to see if it works anything like as well as promised.

  • bji - Sunday, June 02, 2013 - link

    TSX is so esoteric in its applicability that I think you'd be very hard pressed to a) find a benchmark that could actually exercise it in a meaningful way and b) have any expectation that this benchmark would translate into any actual perceived performance gain in any application run by 99.999% of users.

    In other words - TSX is only going to help performance in some very rare and obscure types of software that "normal" users will never even come close to using, let alone caring about the performance of.

    However, I am intrigued by your speculation that TSX will be beneficial for physics simulation, which I guess could translate to perceivable performance increases for software that end users might actually use in the form of game physics. I found a paper that described techniques for using transactional memory to improve performance for physics simulation, but it only found a 27% performance increase, which is not exactly earth-shattering (I wouldn't call it "huge for game physics" personally).
  • Amaranthus - Monday, June 03, 2013 - link

    One of the main (and already implemented) uses of TSX is hardware lock elision. I'd guess the hypothesis is that physics code takes locks defensively but rarely actually has contention, because threads are working on different parts of the world. In this scenario, more fine-grained locks on sections of the world would let you scale better, but that is a lot of work, and HLE gives you the same benefit for free.
  • Jaybus - Monday, June 03, 2013 - link

    No. HLE (XACQUIRE and XRELEASE) does nothing by itself. The prefixes reuse the REPNE/REPE encodings, and on CPUs that do not support TSX they are simply ignored on instructions that would be valid XACQUIRE/XRELEASE targets. It is a backward-compatibility method. Since all of those instructions may have a LOCK prefix, without TSX capability a normal lock is used, NOT the optimistic locking provided by TSX that allows other threads to see the lock as already free.

    Without TSX the code is still (software) lock-free, but there is no possibility of multiple threads accessing the same memory simultaneously (as there is with TSX), so one or more threads will see a pipeline stall due to the LOCK prefix.
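    The optimistic-reader behavior described above can be mimicked in plain software with a version counter. This is a seqlock-style sketch of the idea only (the class and method names are made up; actual TSX/HLE does this in hardware at the cache-coherence level, with no code changes):

```python
import threading

class VersionedValue:
    """Readers proceed without the lock and retry if a writer intervened,
    roughly the effect HLE gives uncontended lock-based code for free."""
    def __init__(self):
        self._lock = threading.Lock()
        self._version = 0   # odd while a write is in progress
        self._value = 0

    def read(self):
        # Optimistic path: no lock taken; retry on a detected conflict.
        while True:
            v1 = self._version
            value = self._value
            v2 = self._version
            if v1 == v2 and v1 % 2 == 0:   # stable, no writer in flight
                return value

    def add(self, n):
        # Writers still serialize on the real lock, as they would without TSX.
        with self._lock:
            self._version += 1   # now odd: update in progress
            self._value += n
            self._version += 1   # even again: stable
```

    With no writer in flight, the reader never touches the lock, which is exactly the uncontended fast path HLE optimizes; under real contention both schemes fall back to serialized execution.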
  • bji - Monday, June 03, 2013 - link

    I can't imagine that lock elision is that beneficial to very many applications. Lock contention is almost never a significant performance bottleneck; yeah there are poorly designed applications where lock contention can have a more significant effect, but proper multithreaded coding has the contended sections of code reduced to the smallest number of instructions possible, at which point the effects of lock contention are minimized.

    In order to take advantage of transactional memory and get the full benefits of TSX you have to write such radically different algorithms that I doubt that it's worth it except in the most unusual and specific cases. OK so you can use TSX instructions to make a hashtable or other container class suffer slightly less from lock contention, but that is oh so very rarely a significant aspect to the performance of any program.
  • klmccaughey - Monday, June 03, 2013 - link

    As a programmer, I disagree. This is a feature set that, if it were more widely adopted, would prove very useful for many workaday tasks the CPU performs.
  • bji - Monday, June 03, 2013 - link

    As a programmer, I am pretty sure that the benefits of TSX are limited to a very unusual and uncommon set of problems the performance increase of which will mean very very little to 99.99% of users 99.99% of the time. Also fully transactional memory algorithms require significant rework from their non-transactional counterparts meaning that taking full advantage of TSX takes developer effort which will not be worth it except in very rare circumstances.

    The HLE instructions may have some very minor benefit because they can be used with algorithms that don't need to be reworked at all (you just get a little bit more parallelism for free), but even then you're going to be avoiding some lock contention; even if you completely eliminated lock contention from most algorithms they would only be fractionally faster in real world usage. Lock contention just isn't that big of a deal in normal circumstances.
  • klmccaughey - Monday, June 03, 2013 - link

    Exactly. It is the ubiquity of these features that would make them useful; splitting them into segments defeats their adoption and use. Intel is pushing segmentation too hard (too greedily?).
  • bill5 - Saturday, June 01, 2013 - link

    Kind of a weird, scattered review.

    Loved the Q6600 though, since I still have one. And the 8350, since I have my eye on it.

    It'll be interesting if this pushes 8350 prices down enough to be more attractive (it's currently only $180 on Newegg). If not, I'll probably go with the i5 4670 (even though I'm getting tired of these faux MSRPs; bet money that chip will be $229 on Newegg, forget $213).

    PS: my bill4 account was apparently banned (it kept saying I was posting spam and wouldn't allow me to post). I post controversial things that probably get downvoted, but they aren't spam. Please stop doing that.
  • bill5 - Saturday, June 01, 2013 - link

    this really shows where the 8350 fails, single thread.

    it looks like clock for clock it's ipc may be similar to my q6600. it only gains in single thread due to the gaudy 4.0 clock speed.

    otoh go to multithread and it holds its own against the other ~$200 intel chips.
  • Nexus-7 - Saturday, June 01, 2013 - link

    In for one 4770k --

    I'm coming from an i5-750 running at 4GHz. I'm thinking this will be a sufficiently large leap forward although I'm tempted to wait for Ivy Bridge-E's 6c12t monsters.
  • chizow - Saturday, June 01, 2013 - link

    Similar boat, but I told myself I wouldn't wait for Intel's E platform anymore. X58 may be the last great E platform, mainly because it actually preceded the rest of the mainstream performance parts. Intel seems to sit on these E platforms for so long now that they almost become irrelevant by the time they launch. Reply
  • Nacho - Saturday, June 01, 2013 - link

    Maybe it's time to upgrade my C2D E4300? :P Reply
  • krumme - Saturday, June 01, 2013 - link

    Absolutely, go get a good ssd and this processor or IB, if its for desktop. Reply
  • Oscarcharliezulu - Monday, June 03, 2013 - link

    Do it. Reply
  • Boissez - Saturday, June 01, 2013 - link

    So my 2½-year-old 2600K ($317) performs about the same as today's 4670K ($242). Color me underwhelmed.

    Meanwhile in mobile-land we've gone from the 1GHz Tegra 2 to the 2GHz Snapdragon S600 within the same timespan.
  • tential - Saturday, June 01, 2013 - link

    Because for 99.9% of the population what's out there today is more than fast enough. Hell, the Core2Duo Conroe/Penryn processors are fast enough for most people today. I'm still using one in fact.

    On the mobile side, however, we have tons of applications that could use more power. My Galaxy S3 takes a little while to load up some games, and even when the data has already been downloaded to the phone over WiFi, it still takes time to get onto my screen.

    I think it's pretty obvious why mobile-land has to progress so fast while desktop processors focus on power consumption: the AVERAGE consumer (not us techies) prefers smaller PCs, and pushing more power-efficient processors into smaller and smaller things like the Intel NUC is what that consumer desires.

    In short:
    Your desires are the 1%. The 99% are being catered to.
  • klmccaughey - Monday, June 03, 2013 - link

    @tential: Your last statement, maybe it should read "OUR desires are the 1%"? ;) I bet we would all be clapping right now if the 4770k was a big upgrade. Well, most of us, I think? Reply
  • jeffkibuule - Saturday, June 01, 2013 - link

    Mobile is having the same performance renaissance that desktop chips had from 2004-2006 when we went from a hot, bloated Pentium 4 to a cool, efficient Core 2 Duo. And certainly we've had performance gains since then, but eventually the gains won't come so easily. You can start to see that a bit now with how the Exynos 5250 in the Nexus 10 is thermally throttled to 4W such that CPU and GPU can't be both running full tilt at the same time. Reply
  • Homeles - Saturday, June 01, 2013 - link

    You're disappointed because your understanding of physics and Moore's Law is poorly developed. The scenario you've provided is a blatant false equivalency.

    According to your desperate desires, the roughly 4GHz processors that launched with Sandy Bridge should be running at twice the clock speed today.

    When you understand that leakage power grows exponentially as transistor geometries shrink, and that power consumption rises exponentially as clock speed rises, you will realize that even the 10% gains that Haswell makes here are a big deal.
  • Dal Makhani - Saturday, June 01, 2013 - link

    Homeles, I really appreciate your well-said comment. I'm taking a business degree with an accounting major, but I've always loved building PCs as a hobby. When some of my computer science/engineering friends try to show me the stuff they are learning, I am baffled, as it's not my area of expertise. I can only imagine how challenging it is to combat the shrinking processes and still make performance gains, as you said. I have deep respect for Intel and AMD, always trying to put their research and engineers to work making any gains for society. These forum people are just so ignorant sometimes and it baffles me. Reply
  • chizow - Saturday, June 01, 2013 - link

    Hey, similar path as me. :) Don't worry about lack of understanding now, stick to it, keep reading great technical sites like AT, keep an open mind, and you'll get a really good grip on the industry, especially if you are an actual user/enthusiast of the products. Reply
  • chizow - Saturday, June 01, 2013 - link

    The other big problem in the CPU space, besides power consumption and frequency, is the fact that Intel long ago stopped spending the extra transistor budget from each new process node on the actual CPU portion of the die. Most of the increased budget afforded by a new process goes right to the GPU. We will probably not see a stop to this for some time, until Intel reaches discrete-GPU performance equivalency. Reply
  • Jaybus - Monday, June 03, 2013 - link

    Well, I don't know. Cache sizes have increased dramatically. Reply
  • chizow - Monday, June 03, 2013 - link

    Not per core; these parts are still 4C/8MB, same as my Nehalem-based i7. Some of the SB-E parts have more cache per core, 4C/10MB on the 3820 and 6C/15MB on the 3960/3970, but the extra bit makes a negligible difference over the 2MB per core on the 3930K. Reply
  • Boissez - Sunday, June 02, 2013 - link

    I think you've misunderstood me.

    I'm merely pointing out that, in the past 2½ years, we've barely seen any performance improvement in the $250-300 market from Intel. That is in stark contrast to the developments in mobile-land. They, too, are bound by the constraints you mention.

    And please, stop the pompous know-it-all attitude. For the record, power consumption actually rises *linearly* with clock speed and *quadratically* with voltage. If your understanding of Joule's law and Ohm's law were better developed, you would know that.
  • klmccaughey - Monday, June 03, 2013 - link

    Exactly. And it won't change until we see optical/biological chips or some other such future-tech breakthrough. As it is, the electrons start to behave in light/waveform fashion at higher frequencies, if I remember correctly from my semiconductor classes (of some years ago, I might add). Reply
  • Jaybus - Monday, June 03, 2013 - link

    Yes, but we will first see hybrid approaches. Intel, IBM, and others have been working on them and are getting close. Sure, optical interconnects have been available for some time, but not as an integrated on-chip feature which is now being called "silicon photonics". Many of the components are already there; micro-scale lenses, waveguides, and other optical components, avalanche photodiode detectors able to detect a very tiny photon flux, etc. All of those can be crafted with existing CMOS processes. The missing link is a cheaply made micro-scale laser.

    Think about it. An on-chip optical transceiver at THz frequencies allows optical chip-to-chip data transfer at on-chip electronic bus speeds, or faster. There is no need for L2 or L3 cache. Multiple small dies can be linked together to form a larger virtual die, increasing productivity and reducing cost. What if you could replace a 256 trace memory bus on a GPU with a single optical signal? There are huge implications both for performance and power use, even long before there are photonic transistors. Don't know about biological, but optical integration could make a difference in the not-so-far-off future.
  • tipoo - Saturday, June 01, 2013 - link

    It's easier to move upwards from where ARM chips started a few years back. A bit like a developing economy showing growth numbers you would never see in a developed one. Reply
  • Genx87 - Saturday, June 01, 2013 - link

    Interesting review. But I'm finding it hard to justify replacing my i5-2500K. I guess next summer, on the next iteration? Reply
  • kyuu - Saturday, June 01, 2013 - link

    Agreed, especially considering Haswell seems to be an even poorer overclocker than Ivy Bridge. My i5-2500k @ 4.6GHz will be just fine for some time to come, it seems. Reply
  • klmccaughey - Monday, June 03, 2013 - link

    Me too. I have a 2500k @ 4.3Ghz @ 1.28v and I am starting to wonder if even the next tick/tock will tempt me to upgrade.

    Maybe if they start doing a K chip with no onboard GPU and use the extra silicon for extra cores? Even then, the cores aren't currently well utilized at 4. But maybe concurrency adoption will increase as time goes by.
  • Ninokuni - Saturday, June 01, 2013 - link

    "The new active idle (S0ix) states are not supported by any of the desktop SKUs"

    Does this mean the whole PSU Haswell compatibility issue is now irrelevant?
  • jhoff80 - Saturday, June 01, 2013 - link

    I believe that PSU issue was in relation to the C6 and C7 states, not S0ix. Reply
  • smilingcrow - Saturday, June 01, 2013 - link

    Correct as desktop PSUs are used with desktop CPUs. :) Reply
  • Egg - Saturday, June 01, 2013 - link

    To reiterate owikh84, there appears to be a serious typo in the title - it should be 4670k, not what appears to be a nonexistent 4560k part. Reply
  • RaistlinZ - Saturday, June 01, 2013 - link

    This review was a bit more positive than the one at Guru3d. Their numbers show the performance difference being almost nothing. Guess my i7-930 @4Ghz will dredge on for another generation. Reply
  • chizow - Saturday, June 01, 2013 - link

    I'd consider it, but the IPC gains since Nehalem alone make the upgrade worthwhile, and the rest of the X58 spec is sagging badly by now. It still does OK on PCIe bandwidth thanks to its 40 total PCIe 2.0 lanes, and 20 PCIe 3.0 lanes are equivalent once you're using PCIe 3.0 cards. The main benefit, however, is getting SATA 6Gbps for my SSDs and some USB 3.0 ports for enclosures, etc. They have been bottlenecked for too long on SATA 3Gbps and USB 2.0.

    I would consider waiting for IVB-E or Haswell-E but Intel always drags their feet and ultimately, these solutions still end up feeling like a half step back from the leading edge mainstream performance parts.
  • zanon - Saturday, June 01, 2013 - link

    Intel's artificial segmentation of some features is more irritating than others', but not including VT-d everywhere really, really sucks. An IOMMU isn't just helpful for virtualization (and virtualization isn't just a "business" feature either); it's critical from a security standpoint if an OS is to prevent DMA attacks via connected devices (and it helps stability, too). It should be standard, not a segmentation feature. Reply
  • klmccaughey - Monday, June 03, 2013 - link

    Yeah, I just don't get why they dropped VT-d. Really silly. A lot of power users on the desktop use virtualisation. Reply
  • Chriz - Saturday, June 01, 2013 - link

    I'm curious about something. If you say Intel got rid of legacy PCI support in the 8 series chipsets, why am I still seeing PCI slots on these new motherboards being released? Are they using third party controllers for PCI? Reply
  • CajunArson - Saturday, June 01, 2013 - link

    "Are they using third party controllers for PCI?"

  • smoohta - Saturday, June 01, 2013 - link

    Blah, this seems like a rather shallow review:
    1. What about benchmarks that take advantage of the new AVX2 instructions? (FMA specifically would be interesting.)
    2. Same for TSX?
  • Klimax - Sunday, June 02, 2013 - link

    I know only about x264 having it in the last versions. Not sure who else has it. Reply
  • Gigaplex - Saturday, June 01, 2013 - link

    "Here I’m showing an 11.8% increase in power consumption, and in this particular test the Core i7-4770K is 13% faster than the i7-3770K. Power consumption goes up, but so does performance per watt."

    So... performance per watt increased by ~1%. For a completely new architecture that's supposedly all about power optimisation, that's extremely underwhelming to say the least.
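    The arithmetic behind that ~1% figure, using the two percentages quoted from the review:

```python
# Perf-per-watt change implied by the quoted numbers: the 4770K is 13%
# faster than the 3770K while drawing 11.8% more power in that test.
perf_ratio = 1.13
power_ratio = 1.118
perf_per_watt_gain = perf_ratio / power_ratio - 1.0
print(f"perf/watt change: {perf_per_watt_gain:+.1%}")  # about +1%
```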
  • Homeles - Saturday, June 01, 2013 - link

    Haswell is not focused on the desktop; I'm not sure how you managed to believe that it is. Reply
  • krumme - Saturday, June 01, 2013 - link

    Because Anand is a fan of it, even at desktop? Reply
  • MatthiasP - Saturday, June 01, 2013 - link

    So we get +10% performance increase for +10% increase in energy consumption? That's rather disappointing for a new generation. Reply
  • jeffkibuule - Saturday, June 01, 2013 - link

    Haswell is moving voltage regulators that were already on the motherboard onto the die, so total power consumption hasn't really changed; it's just that the CPU cooling system has to deal with that extra heat now. Remember that those power ratings are NOT about how much power the chip uses, but how much cooling is needed. Reply
  • Homeles - Saturday, June 01, 2013 - link

    System power consumption with Haswell is, in fact, higher. Take a look at page 2.

    Still, when you're running at these kinds of frequencies, 10% more performance for 10% more power is a big deal. If you were to hold the performance gains back to 0%, the power savings would be greater than 25%.

    The only reason Piledriver was able to avoid this was because it was improving on something that was already so broken. AMD's not immune to the laws of physics -- when they catch up to Intel, they will hit the same wall.
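    That savings estimate is at least plausible under the standard dynamic-power model, P ≈ C·V²·f, if you assume voltage has to scale roughly in step with frequency near the top of the voltage/frequency curve (an illustrative assumption, not measured silicon data):

```python
# Dynamic power ~ C * V^2 * f. Near peak clocks, voltage must typically
# rise with frequency, so power scales roughly with f^3 under this model.
def relative_power(freq_ratio, voltage_tracks_frequency=True):
    v = freq_ratio if voltage_tracks_frequency else 1.0
    return v * v * freq_ratio

# Give back a 10% clock increase (run at 0.9x frequency):
saving = 1.0 - relative_power(0.90)
print(f"power saved at 0.9x clock: ~{saving:.0%}")  # roughly a quarter or more
```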
  • Klimax - Sunday, June 02, 2013 - link

    Most likely sooner, because they can't fine tune process. Reply
  • dgz - Saturday, June 01, 2013 - link

    I agree but Intel has been doing that for many years. I just don't get what they're gaining by artificially restricting IOMMU support. Reply
  • CajunArson - Saturday, June 01, 2013 - link

    Great review, but I have a question about this rather cryptic comment for bclk overclocking:

    "All CPUs are frequency locked, however K-series parts ship fully unlocked. A new addition is the ability to adjust BCLK to one of three pre-defined straps (100/125/167MHz). The BCLK adjustment gives you a little more flexibility when overclocking, but you still need a K-SKU to take advantage of the options."

    Does that mean you cannot do bclk overclocking on the non-K series parts? For example, are you saying that a 4770 (non-K) part cannot be used with a bclk overclock? Or are you just saying that the K-series parts give you all the options including unlocked multipliers? Can you clarify this?
  • Rajinder Gill - Saturday, June 01, 2013 - link

    All you can do on the non-K parts is a 100MHz BCLK ± 5%. Reply
  • smilingcrow - Saturday, June 01, 2013 - link

    It doesn't seem cryptic to me! Reply
  • kasakka - Saturday, June 01, 2013 - link

    Too bad there are no temperature comparisons. Would be interesting to see if Intel has improved the TIM under the heatspreader.

    That said, now I'm glad I didn't wait for Haswell, as it doesn't seem to have much to give over Ivy unless you use the integrated GPU.
  • A5 - Saturday, June 01, 2013 - link

    Other reviews have it as notably hotter under load, fwiw. Probably due to the voltage regulators. Reply
  • HisDivineOrder - Saturday, June 01, 2013 - link

    Now that AMD has mostly fallen back to the mid-range and low-end, this is a similar situation to where the new Geforces landed.

    You get a bit more performance for about the same money. For the GPU side, the benefit was mostly in superior cooling solutions all (supposedly) having to be equivalent to the excellent Titan Blower. For the CPU side, the benefit is that we have lower idles. These chips stay in idle a lot, so it's a gain, but this isn't a chip that's going to light the hobbyist world on fire.

    Just like with the GF770, you get more performance and a few fringe benefits (that should have been there all along, ie., 6 SATA3 connections) for the same as you would have paid for the equivalent part last week.

    I don't see much here to make me want to upgrade from my IVB 3750k, though. I'm leaning toward picking up a used GF670 and SLI'ing now, given all the givens.

    The truly disappointing part of all this is if this is truly the last new desktop release for two years. Imagine me going 3+ years before I even FEEL an itch to upgrade my processor. I sincerely pray that AMD gets its act together and puts some competitive pressure on Intel at the mid-high end (ie., 2500k, 3570k) with a truly great CPU. I live in hope that the 8350 successor (based on Steamroller?) will be that part, but AMD needs to update their chipsets big time.

    Until then, I think all we can expect from Intel and nVidia is more of the same, which is the worst part of both the 700 series and Haswell. Neither felt compelled to do more than offer minor improvements in performance because neither is feeling any competitive pressure of any kind.

    That's why Intel IS pushing the power argument and fighting that fight hard. Because ARM *is* applying competitive pressure.
  • Hector2 - Saturday, June 01, 2013 - link

    Even without competition, Intel is still driven by economics to keep pushing transistor sizes and die sizes smaller and smaller --- it still lowers their costs and they make more money. This also means chips keep getting faster and requiring less and less power. What competition does, besides lowering prices, is drive architectural changes that add more die size (like an integrated GPU and FIVR). Reply
  • JDG1980 - Saturday, June 01, 2013 - link

    Keep in mind that an increasing percentage of desktop/laptop PCs are now in the business world (since light-use consumers have often moved towards tablets and smartphones). If you're doing office work, then lower power use on idle/light load is a big deal. Office PCs almost never run balls-to-the-wall. In fact, usually the only time the CPU even comes close to being completely pegged is when the mandatory virus scan runs (and even then, it's often HDD-bound). Reply
  • leliel - Saturday, June 01, 2013 - link

    I'm still on a Lynnfield 750 (as are a few other commenters, I note) and this system is now 3.5 years old without me having had the itch to upgrade or even overclock the CPU. I have been eyeing Haswell because I know I will be making a fresh build at the end of the year, but that's due to circumstance and not need. 30% clock increase in four years is nothing like the old days... but frankly it's nice to be able to keep up in everything just by swapping out video cards. Reply
  • Klimax - Sunday, June 02, 2013 - link

    I doubt we will see large increases in the future. We need new algorithms (current ones are at their limit). Why? Because major performance increases would require a significant increase in complexity, and GPUs have shown what that causes.
    And AMD won't and cannot change that.
  • klmccaughey - Monday, June 03, 2013 - link

    AMD are a damn sight closer to Nvidia than they are to Intel, though still sandbagging this year. It's a bad year for us upgraders! Reply
  • aCuria - Saturday, June 01, 2013 - link

    This review needs a compilation speed test against the 3930K; I would really like to know whether Haswell could edge out the 3930K in that test. Reply
  • Kevin G - Saturday, June 01, 2013 - link

    Haswell met expectations in terms of IPC increases and power reductions. Both of those are good things overall. However, I feel disappointed and that comes down to how Intel has segregated their product line up: GT3e and TSX are only available on select parts. Ideally on the high end I'd like to get a socketed chip with an unlocked multiplier, GT3e, TSX, and Hyperthreading. Of those five criteria, at best I can get three of those. I suspect that this is due to Intel keeping several possible configurations reserved for their Xeon lineup but those chips won't have an unlocked multiplier.

    I'm currently an owner of a Sandy Bridge i7-2600K, and the current performance of the Haswell parts isn't tempting enough to jump to the configurations Intel is selling. So I'm left waiting another year for a future desktop refresh before making the jump. Oh wait, Broadwell is going to be strictly a mobile (and possibly desktop BGA) refresh. So the best upgrade path for me for the next couple of years is to wait for a cheap i7-3770K on clearance; otherwise the prices are radically higher relative to the performance gains, so it's not worth it (I'd also need a new motherboard for socket 1150). I guess I'm left waiting for Skylake, hoping Intel adds the SKUs I want, or seeing what AMD can produce for the desktop.
  • Khato - Saturday, June 01, 2013 - link

    Just curious as to your source for TSX only being available on select parts? The only place I saw that was the Tom's Hardware preview; it's not in any of the reviews that I've seen. Reply
  • amock - Saturday, June 01, 2013 - link

    According to the specifications, the 4770K doesn't have TSX support. Reply
  • Kevin G - Saturday, June 01, 2013 - link

    Intel is likely keeping TSX away from any desktop part with eDRAM. I suspect that having a massive L4 cache plus TSX would make these quad-core chips very competitive with some of their socket 2011 parts based on Sandy Bridge/Ivy Bridge cores, particularly in heavily memory-bound applications, as some database operations tend to be. Intel hasn't updated ARK with all of the Haswell chips yet, so I'll be curious to see whether there will be a mobile part with GT3e + TSX. I'd love it if some enterprise DBs supporting TSX were tested on this platform to see if this idea pans out. Reply
  • smilingcrow - Saturday, June 01, 2013 - link

    The i5/i7 K series socketed desktop chips don't have eDRAM anyway so that's a moot point; they both lack TSX support. Reply
  • Kevin G - Monday, June 03, 2013 - link

    Well, it does look like the L4 cache in GT3e parts can make Haswell competitive with socket 2011 six-core chips. Check out the i7-4950HQ results from Tech Report's Euler3D fluid dynamics test.

    The i7-4950HQ was likely seeing a strong benefit from an open-air test bed (it is a mobile part), so I suspect it can reach its 3.4GHz four-core turbo relatively often. I'd still expect slightly better performance if the 2.4GHz base clock were raised. The i7-4950HQ is further handicapped in that its L3 cache was cut down to 6MB and it was using SO-DIMM memory with loose timings. The kicker is that this is with legacy code, without any AVX2, FMA or TSX benefits.

    I really, really want an unlocked, non-crippled socket 1150 part with GT3e.
  • just4U - Sunday, June 02, 2013 - link

    Well.. you're already on an i7, and tbh I'd say the 2600K is better than the 3770K. I purchased a 2700K even though the 3770K was out. Why? Lower heat and minuscule gains in the 3000 series, plus when I decide to OC I'll likely get more out of my chip (as will you) than they get out of the Ivy Bridge processors. Reply
  • Kevin G - Monday, June 03, 2013 - link

    I currently have my i7 2600K running at a conservative 4.2 Ghz. I've gotten it to boot at 4.6 Ghz with ease and probably could go further if felt like increasing voltages and got better cooling (though it is in a 4U server case so cooling options are a bit more limited).

    Price drops are happening for the i7 3770K. Probably need to wait a bit more and might pick one up if they drop further. Microcenter has dropped them to $230 already.
  • chizow - Saturday, June 01, 2013 - link

    Nice review Anand, it's pretty much what I expected from Haswell. 5-15% over IVB with all the bells and whistles of Lynx Point Z87 (6xSATA6G, more USB 3.0 etc.) This will make a nice upgrade for me coming from an OC'd i7-920 and X58 platform, now to see what deals MicroCenter has on the 4770K.

    I would have liked to have seen normalized clockspeed comparisons in the 5-gen Intel round-up but understand this does not reflect real-world results, given SB and above have much better turbo boost and base clocks. I think it would've given a better idea of IPC however, for those who have been overclocking their older platforms to similar max OC levels.

    I also would have liked to have seen more gaming and OC'ing tests but understand this first review needed to cover most of the bases for a general audience, look forward to more testing in the future along with some looks at the Z87 chipset nuances.
  • Concillian - Saturday, June 01, 2013 - link

    So what I'm seeing is the 4770 compared with the 3770: ~13% more power at load for Haswell, but less than 10% more performance in the benchmarks? Is that correct? Reply
  • A5 - Saturday, June 01, 2013 - link

    Anand's numbers put it at 13% faster with an 11% power increase. Not sure how you did the math. Reply
  • Concillian - Saturday, June 01, 2013 - link

    You're right in that particular test. +12% power for +13% performance. Still disappointing. Most of the other benchmarks are showing less than 10% improvement, but we don't know the power story. Overall disappointing. With all the talk about power efficiency, I was hoping for +5-10% performance at the same or lower power consumption. All the power benefits seem to be at idle. Reply
  • gipper51 - Saturday, June 01, 2013 - link

    I'm glad I went ahead and built my 3770 system a few months ago instead of holding out for Haswell. Nothing about Haswell was worth waiting for (for my needs). Damn...based on this and Intel's roadmaps I may be on IVB for a looooong time. Reply
  • LordSegan - Saturday, June 01, 2013 - link

    Very weak new chip. Minimal increase in performance unless you are running a render farm or using a crappy ultrabook. Useless for desktop gamers. Reply
  • vlvh - Saturday, June 01, 2013 - link

    I'm just wondering what the rationale is for using a Core 2 Duo in the comparison benchmarks? Surely a Core 2 Quad (e.g. Q6600) would be a more accurate representation, seeing as all the other parts in the benchmark are quad-core. Reply
  • WhoBeDaPlaya - Saturday, June 01, 2013 - link

    You'd think Anand would have covered something as important as this.
    I did not see this in _any_ of the reviews.
    Also, the wording on the BCLK overclocking is a little odd. So, bottom line: can we OC the 4770 using BCLK or not?
  • Kevin G - Monday, June 03, 2013 - link

    The actual BCLK gains will be pretty much in line with what you'd be able to do on Z68 or Z77: about 110MHz max.

    Socket 1150 and Z87 add another bus multiplier to feed the CPU, like socket 2011 parts have. So you can have a 100MHz clock feeding the PCI-E controller while, with a 1.25x strap, a 125MHz clock feeds the CPU cores before the CPU multiplier. Increasing the BCLK to 108MHz on the 1.25x strap would give a 135MHz clock before the CPU multiplier is applied.
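    The strap arithmetic above is easy to check; a small sketch (the 35x CPU multiplier below is just an illustrative value, not a claim about any specific SKU):

```python
# Haswell clocking sketch: the PCIe controller sees BCLK directly, while the
# CPU cores see BCLK * strap (1.0x / 1.25x / 1.67x) * CPU multiplier.
def core_clock_mhz(bclk_mhz, strap, cpu_multiplier):
    return bclk_mhz * strap * cpu_multiplier

print(core_clock_mhz(100, 1.25, 1))    # 125.0 MHz before the CPU multiplier
print(core_clock_mhz(108, 1.25, 1))    # 135.0 MHz, the example above
print(core_clock_mhz(108, 1.25, 35))   # with a hypothetical 35x multiplier
```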
  • jmcb - Saturday, June 01, 2013 - link

    All this means is when I finally get my first quad core PC, a 3770k will be cheaper. I see no reason to get this over that. Reply
  • jmcb - Saturday, June 01, 2013 - link

    I currently have an E8400 so the Haswell would be amazing....but I will go for amazing and value. Reply
  • _zenith - Saturday, June 01, 2013 - link

    "Even on the dekstop" - desktop. Just spelling gripe.

    I notice the gains over Nehalem are pretty much the same that you can *easily* get (just increase VCore and BCLK) out of overclocking Nehalem. So now they're neck and neck if you don't overclock the Haswell part. Maybe my system will be good for another year now! (i7-920 @ 4.0).
  • Da W - Saturday, June 01, 2013 - link

    Now I can buy my Trinity + Radeon rig for the price of a Haswell-only rig. I know it will play every video game made for the Xbox or PS4 for the next 7-8 years and make a perfect HTPC!
  • ChefJeff789 - Saturday, June 01, 2013 - link

    I've been sitting on a first-gen Core i7 and this is still a little disappointing. It is significantly quicker than mine, but I'm not sure the upgrade is quite worth it for me yet, as everything I use runs reasonably well. I am interested in the new socket, though. I wonder whether upgrading the motherboard with a decent Haswell would benefit me later on with Skylake. It's hard to say; I haven't seen any firm specs on the socket type for the Skylake chips. Reply
  • jwcalla - Saturday, June 01, 2013 - link

    At this point I think there are more interesting places to put your tech dollars. A high-DPI display or triple monitors if you're into gaming, more SSDs if you need the space, a NAS or a nice tablet, etc. I'm in your camp with an i7-870 and nothing I do pushes it enough to test my patience. Reply
  • kyuu - Saturday, June 01, 2013 - link

    Agreed -- unless you're loaded, there are far more interesting ways to spend your money in the PC arena than on the overhyped and underwhelming Haswell SKUs. Unless you're still stuck on Phenom II or Core2Duo. Reply
  • bji - Sunday, June 02, 2013 - link

    Hey - I'm still "stuck" on a Phenom II (in my desktop; laptop is 15 inch rMBP with Ivy Bridge) and it's not that bad. It's significantly slower than the Bridges in single threaded apps but even so it's fast enough that I never notice, much like jwcalla never notices with his i7-870. And the Phenom II x6 has 6 real cores that do decently well in heavily multithreaded apps, really extensive compiles being the only thing that I ever do that would benefit at all from extra speed. 6 real cores holding their own reasonably well there. Even if an Ivy Bridge were 50% faster per core on the compiles (and it probably is), the x6 has 50% more cores so it kind of evens out ... Reply
  • rabbatabba - Sunday, June 02, 2013 - link

    Does anyone know if x264 is properly optimized for AVX2? Naively, one might expect a 2x speedup for SIMD-integer-dominated code over the AVX1-only 2700K, provided the rest of the system is able to shuffle the data around.

    I am very curious as to the real-world performance gains of integer (and floating-point FMA) SIMD code in Haswell compared to previous generations.
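    The naive 2x figure comes straight from register width: AVX2 widens integer SIMD from the 128-bit lanes of SSE/AVX1 to 256 bits. Whether x264's hot loops actually see that in practice depends on memory bandwidth and how much of the code is vectorized, but the peak arithmetic is simple:

```python
# Peak elements processed per SIMD instruction, by register width.
sse_bits, avx2_bits = 128, 256  # SSE/AVX1 integer ops vs AVX2 integer ops
for elem_bits in (8, 16, 32):
    print(f"{elem_bits}-bit ints: {sse_bits // elem_bits} -> "
          f"{avx2_bits // elem_bits} per instruction "
          f"({avx2_bits // sse_bits}x peak)")
```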
  • Klimax - Sunday, June 02, 2013 - link

    It has support for AVX2. Not sure in how many places, but over months there was stream of AVX2 code paths. Reply
  • plopingo - Sunday, June 02, 2013 - link

    No need for me to upgrade from my i7-960 @ 4.2GHz.
    Maybe next time :D
  • AmdInside - Sunday, June 02, 2013 - link

    I am actually more interested in the new 8 series chipsets than Haswell. I have a Sandybridge work desktop and Ivybridge gaming desktop and would switch my Sandybridge motherboard if the chipset were backwards compatible with SB CPUs. Reply
  • Klimax - Sunday, June 02, 2013 - link

    Just a reminder: we are more or less out of free performance without new algorithms or a massive increase in complexity/power consumption. (There are limits to what we can do.)

    And Intel's engineers won't increase the complexity of the chips. (Otherwise they'd have their own version of the GPU problems.)

    Anyway, waiting for Haswell-E... (Could use AVX2, but don't want to lose the 6 cores of my 3930K.)
  • ashetosvlakas - Sunday, June 02, 2013 - link

    "As we’ve seen in the past, the K-series parts (and now the R-series as well) omit support for vPro, TXT, VT-d, and SIPP from the list."

    As this is the official Haswell review, and since TSX is not included in the K-series parts, I believe this is a huge omission from the review, especially since transactional memory is a revolutionary technology with a lot of potential. I find the lack of any mention misleading, and it should be corrected as soon as possible.
  • KAlmquist - Sunday, June 02, 2013 - link

    Thanks for letting us know this. Checking the specifications, I find:

    i5-4430 no TSX
    i5-4570 has TSX
    i5-4670 has TSX
    i5-4670K no TSX
    i7-4770 has TSX
    i7-4770K no TSX

    So TSX is missing from half of the current Haswell models (ignoring the low-power S and T chips). So far we only have quad-core chips; I expect TSX will be missing from most of the dual-core chips. It doesn't seem that TSX is intended to be used by developers writing general-purpose code.
  • Kougar - Sunday, June 02, 2013 - link

    Is 2016 the earliest we will see Haswell-E launch?

    Buying a 4770K means I'm wasting 33% of the die on a GPU I don't want, and because it's a "K" chip it will also lose VT-d capability. Six-core "K" chips retain VT-d support, but buying SB-E seems silly given Haswell offers much better IPC and single-threaded performance, and later even against IB-E it would feel like paying more for less.

    Intel has taken a perpetual 2-year break between updating its high end, so if IB-E launches Q4'13 then it seems Q4'15 would be the earliest Haswell-E will show up?
  • Charlesm1950 - Sunday, June 02, 2013 - link

    Haswell GT3e graphics come at what price? Certainly a lot more than Haswell GT2 or Richland A10 graphics. And Intel is right, Haswell GT3e graphics beats old, entry-level discrete graphics cards. BUT, Haswell does not do DUAL GRAPHICS! Pair a Haswell with a dGPU and you get only the graphics horsepower of the dGPU. In short, you waste all that die area that Intel devoted to GT3e and eDRAM. On the other hand, pair an AMD Richland A10 with a dGPU and both GPUs work in tandem to give performance scores well beyond Haswell GT3e, and dare I say for a lot less money. That's true even with the cheapest entry-level Radeon GPU. So, Intel has created a monster graphics engine and put memory on the die, but to claim it's free or the best solution is just being a fan boy for Intel. Reply
  • monohouse - Sunday, June 02, 2013 - link

    This, my friends, is a joke: the mighty Haswell K models will not support VT-x. LOL, this is going to be the first nail in Intel's coffin in favor of AMD, and I bet AMD is going to take advantage of it (they already take advantage of Intel's "K" marketing strategy: why buy an incredibly expensive Intel "K" model without VT-x support when you can buy an AMD chip that is both unlocked and has VT-x support?). The joke is that a 5-year-old Core 2 Duo has more to offer than the latest and greatest Intel parts (it supports VT-x). I am running a 4300 MHz Wolfdale (Penryn) and another 3000 MHz Conroe; nothing I see here from Haswell (in part due to the lack of VT-x support) is of significant enough value over my current hardware. I would have considered it if I had a Pentium 4, but even then I would rather go to the AMD side, since they present no obstacles.
    I also still have many functional PCI cards, and I am not about to throw them away just because Intel decided not to support them (yet another nail in Intel's coffin). In my opinion, hardware has to comply with the user's wishes, not the other way around.
  • skrewler2 - Monday, June 03, 2013 - link

    Wait, what? The K series doesn't include VT-d support, I don't see anything saying they don't support VT-x. Reply
  • BillyONeal - Monday, June 03, 2013 - link

    Your comment doesn't make sense. AMD chips obviously don't have VT-x support; they have a competing standard, AMD-V. The first generation of virtualization extensions didn't make much of a dent in perf. The larger gains came as a result of SLAT which both AMD and Intel added around the Nehalem timeframe. Reply
  • monohouse - Sunday, June 23, 2013 - link

    But SLAT is not going to be available on K models because virtualization is disabled? Reply
  • Khato - Sunday, June 02, 2013 - link

    Based on the published Haswell 4C GT2 die shot I believe that your estimates for the graphics area are quite high. It's relatively simple to derive the graphics area on 4C GT2 now that we know the total die size - should be somewhere in the vicinity of 58mm^2. Double that and you get 116mm^2 for GT3.

    As for the remaining 29mm^2 delta between 4C GT2 and 4C GT3 die sizes... I'd chalk that up to both inefficiencies due to going with a more square die instead of the long and skinny that's been with us since SNB and the extra logic/IOs necessary for the eDRAM L4.

    Regardless, there's no question that the 174mm^2 figure for GT3 is incorrect as the 4 cores, associated L3, and system agent on the 4C GT2 die take up approximately 119mm^2, and adding 174mm^2 to that would yield a 293mm^2 die size.
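    Khato's arithmetic, worked through with the figures given above (all areas in mm²; the 119 mm² figure for cores + L3 + system agent is the comment's own estimate, not an official number):

    ```python
    non_gfx = 119               # 4 cores + L3 + system agent on the 4C GT2 die (estimate)
    gt2_gfx = 58                # graphics area derived from the published GT2 die shot
    gt3_gfx = 2 * gt2_gfx       # GT3 doubles the GT2 graphics -> 116
    gt2_die = non_gfx + gt2_gfx # approximate 4C GT2 die size

    extra = 29                  # remaining delta: squarer floorplan + eDRAM L4 logic/IOs
    gt3_die = non_gfx + gt3_gfx + extra

    # Sanity check on the article's 174 mm^2 GT3 graphics figure:
    implied = non_gfx + 174     # would imply an implausibly large die
    print(gt2_die, gt3_die, implied)
    ```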
  • Bobsama - Sunday, June 02, 2013 - link

    I too am on a Core i7-950. I bought it in April 2012. If this system died and I was building anew, I'd probably do it--I'd probably grab an i7-4770K. But for $500, notta chance! Reply
  • LeetMiniWheat - Sunday, June 02, 2013 - link

    Would have been nice to see some actual CPU-bound games, like Skyrim. Or something low res to take the load completely off the GPU. Reply
  • roltzje - Sunday, June 02, 2013 - link

    It doesn't take a genius to figure out what's going on here. Competition spurs innovation, and there is no competition for mid-to-high-end desktop processors, or really even laptop processors. AMD only keeps up in the mid-range because they have to set very high clocks on their inferior architecture.

    Where the competition is, is in mobile: smartphones and tablets. By moving Haswell down this generation, Intel is getting closer to being a true competitor in that space. This year we should see a power and battery life combination that ARM cannot reach.

    There are also huge gains for gaming and gaming laptops. A 14-15" mid-ranger costing $500 with Iris graphics should be able to run most games decently now without all-minimum settings. Especially given that the PS4 and XBO are running x86 low-to-mid-range graphics, I can easily see Haswell notebooks keeping up, especially Iris Pro laptops, which I assume can come in under $1000 and in relatively slim form factors.
  • deepblue08 - Sunday, June 02, 2013 - link

    I think they should benchmark this new CPU with 2-3 GPUs. I don't think one video card is enough to really test the strength of new CPUs, which is why the difference between Ivy Bridge and Haswell is so small. Give them 2-3 GPUs to work with and see if they can really step up. Reply
  • Yahma - Sunday, June 02, 2013 - link

    YAWN... with no competition from AMD, Haswell offers none to just a few percent improvement over the previous generation (depending on whose benchmarks you believe), higher power consumption, and fewer features (i.e. no virtualization extensions on the higher-end models), all at a higher price that requires one to purchase a new motherboard... No thanks! Reply
  • just4U - Sunday, June 02, 2013 - link

    That was covered in a previous article.. it was established that the 3770K was the go to cpu for multiple video card systems. (altho.. I'd say this would do in a pinch as would the previous 2600/2700K) Reply
  • mcbowler - Sunday, June 02, 2013 - link

    Thanks for posting "CPU Performance: Going Even Further Back"; almost twice as fast as my i5 750. I just might upgrade when the NCASE M1 is available. Reply
  • araczynski - Sunday, June 02, 2013 - link

    either way, should be a serious boost in power to my aging E8500 :) Reply
  • winterspan - Monday, June 03, 2013 - link

    So, can someone catch me up on why Haswell isn't being compared to the older Core i7-39xx Sandy Bridge chips with 6 cores (the Sandy Bridge E series)? Is it because they are based on the Xeon architecture and thus not directly comparable? Will we see a Haswell-E (or Ivy Bridge E) series-based Core i7 with more than 4 cores as a follow-up to the Core i7-39xx? Reply
  • TomWomack - Monday, June 03, 2013 - link

    Yes, Ivy Bridge E should appear in the fall; it's not at present quite clear how many cores it will have, possibly only six with the 8- and 12-core units reserved for sale as Xeons.

    Comparing a 4770K and a 3930X, Haswell wins on single-threaded tests and loses on some multi-threaded ones, which is what you'd expect.
  • Kevin G - Monday, June 03, 2013 - link

    Probably because it'd be embarrassed in some cases. For lightly threaded workloads, the i7-4770K would come out on top; the six-core i7-39xx chips need heavily threaded applications to really shine. Also of note is that GT3e versions of Haswell have 128 MB of L4 cache, which further improves IPC. A hypothetical 3.5 GHz fully functional Haswell with GT3e and recompiled software will likely outrun an 8-core Socket 2011 Xeon. Reply
  • boe - Monday, June 03, 2013 - link

    Pretty much a big steaming pile of meh.

    They should have at least put more PCIe lanes on the thing otherwise it is just last years processor by another name.
  • milkod2001 - Monday, June 03, 2013 - link

    Which revision of the i7-4770K was tested here? There have been rumors about early Haswell revisions causing problems with USB 3.0-connected devices: after sleep mode, they can't wake up.

    Would love to see some additional tests covering this problem, to make sure I won't spend weeks sending a faulty CPU back, waiting, and all that s..t.

  • Diogenes5 - Monday, June 03, 2013 - link

    TIL that my 2-year-old i5-2500K still rocks it for the price and the overclock I have it at (beats any current-gen processor for most tasks). Reply
  • Oscarcharliezulu - Monday, June 03, 2013 - link

    Thx for the review Anand, as always very thoughtful and well written. You always have a good mix of objective and intelligent subjective. Looking forward to Ivy-E now. Reply
  • psuedonymous - Monday, June 03, 2013 - link

    Dangit Anand! Why are you still using 2-pass encoding? Everyone and their dog has switched to the faster and more effective CRF. It may have some obscure use as a synthetic benchmark, but it's certainly not a real-world one! Reply
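    For anyone unfamiliar with the distinction, the two approaches look roughly like this (x264 CLI shown; filenames, bitrate, and CRF value are placeholders, and ffmpeg's libx264 accepts the same options):

    ```shell
    # Two-pass: first pass gathers stats, second pass hits an exact bitrate target.
    x264 --pass 1 --bitrate 4000 --preset medium -o /dev/null input.y4m
    x264 --pass 2 --bitrate 4000 --preset medium -o output.mkv input.y4m

    # CRF: single pass at a constant quality level; file size falls where it may.
    # Lower CRF = higher quality; roughly 18-23 is a common range.
    x264 --crf 20 --preset medium -o output.mkv input.y4m
    ```

    Two-pass still makes sense when a fixed output size matters (e.g. fitting a disc); CRF is faster and usually what you want for quality-targeted archiving.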
  • skrewler2 - Monday, June 03, 2013 - link

    Pretty annoying they're still not including VT-d support in their K series. Reply
  • SanX - Monday, June 03, 2013 - link

    Anand gets on my nerves lately. Smells like shill articles everything he personally covers. Reply
  • random2 - Monday, June 03, 2013 - link

    Other reviewers are getting the more typical 10-15% average difference across the board that Intel usually delivers with a new generation over the previous one. If it's not being seen here, maybe there are benchmarks not included that should be. Reply
  • Achaios - Tuesday, June 04, 2013 - link

    1.72 FPS difference between an i7-2600k and an i7-4770k. This is so hopelessly pathetic.

    My QX9650 and my Asus P5Q Deluxe just got a 2 year life extension. I wonder when it's going to be worth it to finally ditch my 2008 system!!!
  • Arbie - Wednesday, June 12, 2013 - link

    The CPU does not control the FPS you're talking about. Beyond a certain point of CPU power (long ago passed) FPS depends on the external graphics card. A Haswell i7-4770K CPU will, according to the Anandtech benchmarks, be about twice as powerful as your QX9650. At 1/3 the cost, and far less idle power. That's not "pathetic".

    You may very well not want to upgrade, because you won't see a difference in your games - but that isn't Intel's fault. In fact it's a problem for them and the desktop market in general. Gains in CPU power are becoming irrelevant. As are PC games, BTW, which is why graphics cards are improving so slowly now. In two more years... it will be the same story, except more so.
  • monohouse - Sunday, June 23, 2013 - link

    +1, I upgraded my system:
    from a E6750 (3800 Mhz, 4MB) to E8400 (4300 Mhz, 6MB)
    from GTX 480 (840 Mhz, 1.5 GB) to GTX 580 (930 Mhz, 3 GB)
    and got the same FPS in Crysis 2 (with MaLDo HD + hi-res textures + DX11),
    which shows that in some cases there is no improvement.
  • Gastoncapo - Wednesday, June 05, 2013 - link

    Honestly, I'm still running my 2500K @ 5 GHz... I don't see the point in upgrading yet; guess I'll just swap my GTX 580 for a 780 :D Reply
  • Nillerboy - Sunday, June 09, 2013 - link

    Will there be a review of the Intel 4770S?

    I am considering building a lab with this CPU, and would like to see the performance difference between the Intel 4770 and 4770S.
  • C433 - Wednesday, June 12, 2013 - link

    second this, best would be to also include 4670S and at least one of the T models Reply
  • ThomasS31 - Monday, June 10, 2013 - link


    What is Intel's response to the overheating/throttling issues with the stock cooler on Haswell (not overclocked)?
    Can you investigate this issue further (as thoroughly as you usually do)?
    It seems there are some manufacturing quality issues with the heat spreader and thermal paste.
  • Arbie - Wednesday, June 12, 2013 - link

    Just some info for others considering a Haswell desktop build.

    I have just assembled an i5-4670K on an Asus Gryphon Z87 mATX mobo, with 8GB of Corsair Vengeance RAM (1333MHz) and a GTX 650 Ti graphics board. This is with a 250GB Samsung SSD and an old 500GB mechanical drive. The PSU is a Corsair CX600 80-Plus.

    ==> At stock clocks, idling on the Windows desktop, this PC pulls only 39W from the wall!

    I am amazed. Idle power is important to me since the box is in a poorly-ventilated room. Upping the AC to improve that area would freeze the rest of the house AND cost a bundle. The time periods spent at full load are negligible in comparison. But the performance is there when needed... this CPU is nearly twice as powerful as my Yorkfield build, which idles at 170W, with HD 5770 gfx and an SSD.

    So I'm very pleased with Haswell on the desktop. And yes, I could have bought a lower level chip and mobo for this use, saving money and a few more Watts, but I wanted the good stuff and the PC may someday be moved to where an OC would be practical.
  • emperius - Friday, June 14, 2013 - link

    Well, I can certainly wait for AMD to get a real competitive boost from Sony's and Microsoft's budgets here. Let's face it, multi-year contracts for console APUs should boost them economically, massively. Reply
  • eckre - Friday, June 14, 2013 - link

    VERY disappointing CPU. If you disregard ALL of the fake synthetic benchmarks and use only the real-world ones, the gains over the 3770K start at 0.03%, max out at 8.8%, and average about a 3% improvement. 3%? Not worth it. Motherboard? Z87 was supposed to drop PCI, but all the motherboards still have it, and nothing outrageous. So basically lower power consumption, better onboard video performance, and no discernible improvement over the 3770K. Reply
  • i7Ahmed920D0 - Monday, June 24, 2013 - link

    GOODBYE 920 D0, I'll never forget you! You're going to mom's rig! Helllllooooo 4670K, with up to 40% better performance for 225 bucks! Reply
  • Yangorang - Thursday, June 27, 2013 - link

    There are some rumors that the Haswell desktop processors run fairly hot - can you confirm this at all in your testing? Reply
  • calico-uk - Friday, July 12, 2013 - link

    Face it: a consumer's market, not a true image of technology evolution. Low power with high temps just doesn't ring true, unless the chip has been purposely 'restricted' with only one cause in mind: sales, and protection of future sales. In layman's terms, greedy bastards more concerned with markets and money than with the desktop guy who's impressed by performance. Blame Mac, Windows 8, and your tiny fucking phone. Reply
  • James McGrath - Sunday, August 11, 2013 - link

    This test is a little biased against the Core 2 Duo, I think. What they have done is essentially pit the very top end of Intel's last four generations against a mid-to-low-end Core 2 Duo. It would be fairer if that generation were represented by a QX9770 or even a Q9650. Reply
  • clyman - Wednesday, October 23, 2013 - link

    I have been involved in computer hardware, software, and programming since 1986. There has never been a more reliable processor than Intel's, at any time. The current ones still have to be tested over time, but I would not buy one. If you put enough case fans with the inferior processors, they might last 2 or 3 years. With an Intel processor, even overclocked with no case fans, provided you used the Intel chipset, nearly 100% of them kept running for 5 or more years. Without overclocking, I still don't know when they will die, since none have to my knowledge. I have built at least 100 computers and have only built a couple of non-Intel ones; it was obvious right away how inferior they were to Intel. Also, I love my i7 Haswell on an Asus Z87-Pro motherboard. There is no need to overclock: the processor and memory score 7.8 out of 7.9 on the Windows scale, and the onboard graphics scores 6.8. Not bad ratings. Also, the CPU only uses 7 to 8 watts of power most of the time. Reply
  • numskulll - Tuesday, November 05, 2013 - link

    I'm quite happy with my undervolted Athlon 620 and Radeon 4200 graphics. More computing power than I need 99% of the time, estimated tdp of about 55-60 watts (1% of use) and system idle of about 40w (80+% of use). And if I ever play games it takes a few seconds to bung in a suitable graphics card. I doubt it will go belly-up in the foreseeable future, but if it does it only cost £50 second-hand about 3 years ago and would be even less to buy now. Reply
  • duttasanjiv - Friday, November 14, 2014 - link

    I am planning to get a new i5 4690 system for 1080p transcoding. I am not interested in GPU-assisted transcoding; my focus is on QuickSync-assisted transcoding. It is said that the latest QuickSync is almost comparable to CPU-based transcoding.

    It is annoying to see all sites compare only obsolete systems with vague specs (including AnandTech). Then they omit any screenshots. Some are so weird they put the video on YouTube!! How can one compare the quality after YouTube has transcoded it again?

    So far I am looking for a scientific, specific comparison covering:

    1) i5 4690, or at least 4570, with CPU only and with QuickSync

    2) Screenshots in PNG
    3) The exact settings used: what CRF? What speed preset, 'slow' or 'medium'? What profile, 'high' or 'main'? The 'faster' or 'veryfast' speed presets are worthless.

    4) The source format: 1080i or 1080p? How long? What was the target size?
    5) The QuickSync performance settings used. I heard that Haswell supports 7 performance-vs-quality settings; I have also never found out whether QuickSync supports any other parameters.

    Merely mentioning FPS or time won't help.

    I request you to please provide us with such a comprehensive comparison, which will help many users like me and settle all doubts for good. Thx so much in advance...
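    A reproducible comparison along these lines could be as simple as encoding the same source twice with pinned settings; the ffmpeg invocations below are a sketch under assumptions (`input.ts`, the CRF value, and the `-global_quality` level are placeholders to be tuned, and the `h264_qsv` wrapper requires an ffmpeg built with Intel Media SDK/QSV support):

    ```shell
    # CPU (x264) reference encode: fixed preset, profile, and CRF.
    ffmpeg -i input.ts -c:v libx264 -preset slow -profile:v high -crf 20 cpu.mp4

    # QuickSync hardware encode via ffmpeg's h264_qsv encoder at a
    # comparable quality target (ICQ mode via -global_quality).
    ffmpeg -i input.ts -c:v h264_qsv -preset slow -global_quality 20 qsv.mp4

    # Grab still frames from identical timestamps for PNG comparison.
    ffmpeg -ss 00:01:00 -i cpu.mp4 -frames:v 1 cpu.png
    ffmpeg -ss 00:01:00 -i qsv.mp4 -frames:v 1 qsv.png
    ```

    Publishing the exact command lines alongside file sizes, encode times, and the PNG frames would address most of the reproducibility complaints above.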
  • minitt - Sunday, December 07, 2014 - link

    They dropped the i5 750 and i7 920 from the gaming benchmarks because both processors would put up decent FPS, which would take away most of the limelight from Haswell. A 4.0 GHz-clocked i5 750 or i7 920 is still capable of keeping modern GPUs running at 100%. I would be more than happy if someone could prove me wrong. Reply
  • Miller1331 - Tuesday, December 01, 2015 - link

    I have the 4770k in my Desktop which is used primarily for high end music production and it eats up everything I have thrown at it thus far Reply
