125 Comments

  • ilkhan - Sunday, May 11, 2014 - link

    With 9-series, Intel is enabling Rapid Storage Technology 13, allowing UEFI support, RADI 0/1/5/10
    Think you meant RAID.
    Reply
  • Ian Cutress - Sunday, May 11, 2014 - link

    Interesting, I remember correcting that a couple of days ago in an edit. Might not have saved. Updated :) Reply
  • ahar - Sunday, May 11, 2014 - link

    I await your correction in the conclusion with bated breath. ;) Reply
  • Bugfree - Sunday, May 11, 2014 - link

    Reading the conclusions I agree that Intel is somehow underperforming, since...well...they can. No real challengers or competition from AMD at this point. I really hope this changes soon... Reply
  • schizoide - Sunday, May 11, 2014 - link

    Exactly. Intel doesn't _need_ to release anything. They essentially have no competitors in the high-end space. AMD's CPUs can't even compete with the i3. Reply
  • nandnandnand - Sunday, May 11, 2014 - link

    It would take a miracle to inject competition into the CPU market: http://www.eetimes.com/document.asp?doc_id=1322247 Reply
  • Jaaap - Monday, May 12, 2014 - link

    We need that miracle.
    This "Haswell Refresh" is total nonsense. It is just shuffling a bit with names and frequencies.

    Where is the looooong overdue desktop quadcore with Iris Pro?
    Reply
  • shorne21592159 - Thursday, May 15, 2014 - link

    I have just changed to using AMD APUs after many years using Intel chips. I'm really enjoying AMD and what they are doing at the moment, with a lot of hope for the future. Although not as fast as Intel's at the moment, they do the job well for me. Plus I seem to be able to overclock AMD chips much more than an Intel, with fewer problems, for some reason. Reply
  • etamin - Sunday, May 11, 2014 - link

    I feel that a whole lot of unnecessary effort was put into the benchmarks. But we appreciate the effort of course.

    I'm looking forward to SATA Express. Are there any compatible consumer level M.2 SSDs currently available?
    Reply
  • weilin - Monday, May 12, 2014 - link

    Yup, look for the models below in M.2 interface

    Intel: 530
    Crucial: M500, M550
    Misc: MyDigitalSSD, Samsung (only second hand/OEM stuff, no retail presence)
    Reply
  • jjj - Sunday, May 11, 2014 - link

    But there is competition, more than they've ever had. You've got tablets and phones killing PC sales, and Intel is just sabotaging the only PC segment able to create any hype around the PC. Nobody else really gives a damn about their products, but hey, they would rather have 60% margins instead of 58% even if they lose us too.
    Instead of giving us a $300 8-core chip with no GPU, they are giving away free tablet Atoms because that will help.....
    Reply
  • juhatus - Sunday, May 11, 2014 - link

    I totally agree, Intel should really push PC again, not replace shims with better ones. Reply
  • bji - Sunday, May 11, 2014 - link

    Intel cannot generate sufficient returns on its R & D dollars anymore. Nobody in the consumer market cares that much about faster Intel chips except CPU enthusiasts, and it's not a market that can even come close to supplying enough money to fund the R & D necessary to significantly advance x86 performance.

    You can blame AMD for not supplying enough competition, but a bigger part of the equation is that faster Intel chips don't sell "enough more" than current generation ones to justify spending huge amounts of R & D money on them.

    You can look at this as a sad fact, but I actually like it. I can buy a laptop now for cheap that is so overly spec'd for my needs that I essentially never need to upgrade, saving me money. Software companies cannot depend upon ever-increasing chip speeds; they have to become better at producing efficient software in order to have the headroom for new features that used to be provided by ever-increasing chip speed.

    Advances in mobile computing are where it's at, and everyone knows that already ...
    Reply
  • Antronman - Monday, May 12, 2014 - link

    Buddy, you've never built a PC, and I'm assuming that you don't have a hardware-heavy job. Reply
  • bji - Monday, May 12, 2014 - link

    Wrong on both counts. Reply
  • Shadowself - Monday, May 12, 2014 - link

    To some extent you're correct, but the reality is that the reasons are interdependent.

    Current performance increases from one generation to the next (Sandy Bridge to Ivy Bridge to Haswell and anticipated into Broadwell) are only 10% - 20% per generation. I, and many like me, buy at the top of the performance range and now use that machine for a few years so that we get a significant performance jump with each new machine purchased. Back 20+ years ago when each generation was 50% to 100% faster I upgraded each generation. It's just not worth it to do so anymore for just a 10% - 20% increase. The aggregate cost of purchasing machines every generation over a few years for such moderate increases in performance is too high.

    Likely Intel (and AMD) are into the range of diminishing returns on performance increases. Unless there are huge architecture changes in the upcoming Skylake series I expect the 10% - 20% increase per generation to continue for at least three more generations -- Haswell to Broadwell, Broadwell to Skylake, and Skylake to Cannonlake -- (and maybe go to 5% -10% performance increase in the Skylake to Cannonlake transition).

    Small increases in performance keep people from buying every generation. People not buying every generation limits profits and what can be put into IR&D. Limited IR&D produces small increases in performance. And the cycle repeats.
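    [Ed: the aggregate effect is easy to put numbers on. A quick sketch, assuming a constant per-generation gain, which is a simplification:]

    ```python
    def compound_speedup(per_gen_gain: float, generations: int) -> float:
        """Cumulative speedup after a number of generations, assuming
        each generation improves performance by the same fraction."""
        return (1.0 + per_gen_gain) ** generations

    # Upgrading every generation buys only the single-step gain:
    every_gen = compound_speedup(0.10, 1)      # 1.10x per purchase

    # Skipping two generations and buying the third compounds the gains:
    skip_to_third = compound_speedup(0.10, 3)  # ~1.33x for one purchase
    ```

    At 10% per generation, even three compounded generations reach only ~33%, well short of the 50-100% single-generation jumps of 20 years ago, which is exactly why buying every generation no longer pays.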
    Reply
  • alacard - Monday, May 12, 2014 - link

    The increases over the past few generations have been nowhere near 10-20 percent.

    Sandy to Ivy was ~5%, with that increase coming solely from power management/boost tweaks. Clock for clock the cores perform identically (to get this metric you simply disable all the power management/boost functions in the BIOS and benchmark from there), with Ivy offering a slight decrease in consumed wattage.

    The Ivy to Haswell performance increase - with a few exceptions - is < 10% in almost all tests, with consumed wattage actually increasing from Ivy.

    Unfortunately, the Intel performance curve over the past 4 years is actually far more compressed than you paint in your post. With the exception of a couple of new task-specific instructions, we've had essentially less than 10% performance gains during that time.
    Reply
  • BadThad - Monday, May 12, 2014 - link

    I agree Shadowself! I'm actually still mainly on a Q6600 system as it meets all of my needs. At last I'm at the point where I'm going to Z97 as the performance benefits and feature sets are finally worth it for my needs. Reply
  • rajod1 - Thursday, May 29, 2014 - link

    I agree, waste of cash to upgrade every generation. Old days were nice. Intel said they would hit 10 GHz by like 2004 or something. LOL. So they hit a wall but still needed cash. Lots of suckers born every minute that will upgrade for a 10 percent increase. Reply
  • Hrel - Monday, May 12, 2014 - link

    The fact that hyperthreading is disabled on the i5's on the desktop is infuriating. Enabling it costs them nothing. Agree on the 8 core at $300, it should still have HT though. Reply
  • Flunk - Monday, May 12, 2014 - link

    Definitely, turn on HT on i5 and give them all 4 cores and offer an 8 core i7 with HT. i3s can soldier on with only 2 cores if they really want to.

    An i7 5770 with 8 cores and HT is what they really need to bring out, but I think they're waiting for AMD to bring out something better first.
    Reply
  • rajod1 - Friday, May 30, 2014 - link

    You may as well ask them to drop the i5, because if you put HT on an i5 it's then an i7. Reply
  • rajod1 - Friday, May 30, 2014 - link

    It does cost them something. They sell me i7s that way. Reply
  • bsim500 - Sunday, May 11, 2014 - link

    Thanks for this review - appreciate the effort that went into it as always.

    "Also of note is the Z97 motherboard we used for these tests implements an Adaptive voltage profile, meaning that artificial loads such as OCCT push the voltage higher than normal, increasing power consumption at load"

    So they've still got that dumb Haswell "feature" of stuffing the Vcore up by +0.1v when you least want it? Also why are your power consumption figures so high in general? My i5-3570 @ 4.0GHz barely pulls 37W idle / 88W 4-threads Prime with a 7870 discrete card (whole system, excluding monitor, measured at the wall). That's roughly 25W lower both idle & load than your i3 (with a 1600MHz idle vs 800MHz Haswell's)! Just out of curiosity, what were the stock/default VIDs like on the i3-4360 / i5-4690? i.e., has the clock speed bump increased required voltages much? Thanks.
    Reply
  • Ian Cutress - Sunday, May 11, 2014 - link

    Adaptive is an Intel specification implementation, but for some reason goes haywire with certain 100% load simulators (OCCT/AIDA).

    With regard to power consumption seeming high:
    a) Low efficiency band of the PSU; hence my saying qualitative analysis is more relevant than quantitative. I need to keep the PSU consistent across all the tests, and some tests require 2x/3x GPUs (e.g. X79). This is probably a large part of it, but all tests are therefore done on the same efficiency curve.
    b) Using a Corsair H80i with two fans and an ODD plugged in. I move the USB devices to USB 2.0 so any USB 3.0 controller can power down, but it still all adds up.
    c) OCCT loading does the adaptive voltage thing, causing more power consumption at load from idle.

    We got ES chips to test, so retail might have adjusted slightly on the stock VID. Also VID can differ from chip to chip in the same bin, so it's not really a good measure. One CPU can have a high VID in the bin, while the next bin up we could get a low VID, and it all looks a bit odd.
    Reply
  • Daniel Egger - Sunday, May 11, 2014 - link

    IMNSHO it is really ridiculous to test all systems with the same outlandish special PSUs that no sane person would ever use. Why not have a testbed for single-card systems with, say, a platinum 500W PSU (which should cover even the nasty R9 295X2 plus a Haswell K processor) and a separate one for crazy setups? At those far sub-20% loads, even with the system fully loaded, it is nearly impossible to get useful readings, not to mention comparable ones, since at these low loads lots of funny effects kick in and skew the results... Reply
  • wetwareinterface - Sunday, May 11, 2014 - link

    The reason you don't have a low end psu on your test bench is it's a test bench.
    The one setup should handle anything you can possibly throw at it and then a little extra for good measure.

    Also having the exact same high wattage psu to test everything on eliminates the psu as a differentiating factor when testing multiple system configurations.

    and finally the sad truth is there are several reviewers working for AnandTech, each from home and each with whatever they have lying around to do said testing with...
    Reply
  • Daniel Egger - Monday, May 12, 2014 - link

    > The reason you don't have a low end psu on your test bench is it's a test bench.
    The one setup should handle anything you can possibly throw at it and then a little extra for good measure.

    Exactly my point for suggesting a 500W PSU rather than something much lower that I would personally put into a build. That should be sufficient for any even just halfway reasonable setup.

    > Also having the exact same high wattage psu to test everything on eliminates the psu as a differentiating factor when testing multiple system configurations.

    Unfortunately that's not true. Very low output on high-wattage PSUs skews the results quite a bit because they typically are not accurate enough when it comes to handling the low loads, thus smearing over the results with their own losses. I assume this is also why our Greek friend here doesn't even bother to test loads below 5% (which would be 60W at a 1200W PSU, about twice as much as my current Haswell PC needs on a mostly idle Windows desktop).
    Reply
  • roxamis - Monday, May 12, 2014 - link

    Take the measurement as qualitative, not quantitative.
    Power consumption at the wall doesn't mean much as a number. It's wrong on many levels, only one of which is the PSU efficiency. It doesn't really matter what % the load is: you measure one thing and you try to deduce something else.
    Wall plug wattmeters can't measure PSUs correctly anyway. Take these numbers with a grain of salt.
    Reply
  • MrSpadge - Monday, May 12, 2014 - link

    The way it is done now folds the power efficiency curve of the PSU (which is rather steep at low loads) into the actual differences between components (which is what we actually want to measure). Noting that the method in general is not very precise doesn't change this.

    And since power consumption is not plotted for the multi-GPU setups (and doesn't need to be, for such an article), it's not necessary to keep the PSU similar across all configurations. I'd rather have a flat PSU curve for all CPUs; this way I could better judge the differences between them.
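    [Ed: to make the distortion concrete, a rough sketch. The efficiency figures below are hypothetical, purely for illustration:]

    ```python
    def wall_reading(component_watts: float, psu_efficiency: float) -> float:
        """Power seen at the wall: component draw inflated by PSU losses."""
        return component_watts / psu_efficiency

    # Two CPUs that really differ by 5 W at the component level...
    cpu_a, cpu_b = 40.0, 45.0

    # ...on an oversized PSU that is only ~65% efficient at such low load:
    apparent_gap_big_psu = wall_reading(cpu_b, 0.65) - wall_reading(cpu_a, 0.65)   # ~7.7 W

    # ...versus a right-sized unit at ~88% efficiency near the same point:
    apparent_gap_small_psu = wall_reading(cpu_b, 0.88) - wall_reading(cpu_a, 0.88) # ~5.7 W
    ```

    And since efficiency itself shifts between the two load points on a real curve, the wall-measured gap is distorted even more than this constant-efficiency sketch suggests.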
    Reply
  • rajod1 - Friday, May 30, 2014 - link

    Wall power readings do mean something to some people just not you. Reply
  • wpcoe - Sunday, May 11, 2014 - link

    The chart comparing the three new Haswell Refresh CPUs with the three previous counterparts shows the i3-4330 with a 3500 MHz base frequency, but all the benchmarks following a bit later show the i3-4330 as 3.6 GHz. The ark.intel.com page shows the i3-4330 to be 3.5 GHz, so I think maybe the benchmarks have an incorrect speed? Reply
  • Ian Cutress - Sunday, May 11, 2014 - link

    It was just those power test graphs that had it mistyped, my bad. Review updated, 3.5 GHz is the correct frequency. Reply
  • The0ne - Sunday, May 11, 2014 - link

    The separation of desktop and workstation hasn't changed. It is technology that has changed and allowed for smaller, faster, quieter, more energy-efficient hardware to be used. I felt it was unnecessary to give labels to something such as this. Human-Limiting? Reply
  • bji - Sunday, May 11, 2014 - link

    "Human limited" vs "CPU limited" is a much, much clearer way of saying what the author is trying to say than is an artificial definition of "workstation" vs. "desktop" that you are proposing. Reply
  • Flunk - Monday, May 12, 2014 - link

    Splitting workstation and desktop has always been about fleecing more money out of people for top end systems. There really isn't a practical difference. Reply
  • Smile286 - Sunday, May 11, 2014 - link

    What about temperatures? Do those Haswell Refresh processors run any cooler or not? Reply
  • Laststop311 - Sunday, May 11, 2014 - link

    Well this was a pointless waste of time. We already know how Haswell performs. Should have just copy-pasted the specs and linked back to previous Haswell reviews. The only interesting thing, the Devil's Canyon with better TIM, was the only thing not covered.

    What people want to see is a bunch of DC OC attempts from various sites, to see the average max OC and max temp it has, and compare that to the 4770K. Everything else in this refresh is meaningless, as we have already seen Haswell benchmarked. Grats on wasting your time, at least you got paid.
    Reply
  • willis936 - Sunday, May 11, 2014 - link

    Man, why are you reading AnandTech if you think redundant data collection is a waste of time? Verification of both results and of expectations in new products is valuable. Intel could call it a refresh after botching every wafer in the past six months and dump it into a new product line. Reply
  • DanNeely - Sunday, May 11, 2014 - link

    This is the locked, non OC, portion of the Haswell refresh. The new unlocked chips have a rumored ETA of early next month.

    If you're not interested; go read something else instead. What you want isn't available yet.
    Reply
  • Flunk - Monday, May 12, 2014 - link

    Proving there is no credible difference is useful, maybe it will save some people a few bucks. Reply
  • nutjob2 - Sunday, May 11, 2014 - link

    Intel is chasing a dying market, i.e. those who are willing to pay for single-threaded performance at any cost and worry less about power consumption, in the desktop and server space.

    These people fall into two broad categories:

    First there are gamers with more money than sense who spend hundreds or thousands each year on the latest CPU/motherboard/GPU that delivers a 5%-10% increase in framerate. Also other people who feel they need the latest and greatest for whatever reason.

    On these people Intel dumps their hottest parts since they're not very concerned about power consumption, water cooling is a badge of honor for these guys.

    Then there are the corporates who are visited by HP/Dell/etc salesmen and are told they need big, big iron so they can virtualise all their servers to "save money". That of course means they've created a single point of failure for all those servers so they need a machine with redundant everything and huge density. No-one has the heart to tell them they still have a single point of failure.

    These people don't care about power consumption so much as the fact that you can't cool or deliver power very well to multiple processors in a box, so Intel gives them their somewhat cooler parts. Because most of the software they run is not very clever and largely single-threaded, they stick to Intel.

    The smart corporate money is not buying any of Intel's overpriced CPUs; instead they're sticking with their existing hardware and waiting for "cloud" providers to lower their prices. Why are they doing that? Because Intel is playing most of the above market for suckers, making them pay through the nose while selling their best parts to Amazon, Google, Microsoft, IBM et al., who pay a fraction of what everyone else pays. Intel doesn't do this out of the kindness of their hearts but because they know that cloud providers will eventually replace many of their existing direct customers, and because those providers don't care about single-threaded performance (they're almost always using all their cores). Intel are milking their cash cows while they kiss up to the new big players, lest they start getting too friendly with AMD or ARM vendors who can make custom parts for them, just like the big players make their own motherboards, etc., shutting out people like HP and Dell.
    Reply
  • willis936 - Sunday, May 11, 2014 - link

    Adding an extra 20 to product numbers is "chasing"? Reply
  • betam4x - Sunday, May 11, 2014 - link

    Regarding your comments:

    1) Gamers don't spend 'hundreds of thousands of dollars every year' on PC components. Top of the line PC hardware can be had for under 3 grand. Typically the users who buy this years new hardware are the ones that skipped the past 1-2 generations of hardware.

    2) Intel's parts aren't 'hot'. They are more power efficient than they've ever been. The performance per watt is among the highest in the industry.

    3) Virtualizing does save money. Even if that server went up in flames, a backup of that VM is stored elsewhere so that it can be quickly brought back up on the backup server located in another rack. This means minutes of downtime instead of hours.

    4) At my company we buy intel CPUs every day. We upgrade all of our machines every 3 years and often have to buy new machines for new employees. Of course, we do other 'crazy' things as well, like dual monitors, ample workspace, etc. Imagine that?
    Reply
  • Gigaplex - Monday, May 12, 2014 - link

    "Gamers don't spend 'hundreds of thousands of dollars every year' on PC components."

    They said hundreds OR thousands, not "of".
    Reply
  • rajod1 - Friday, May 30, 2014 - link

    Gamers as a group spend millions on upgrades not hundreds or thousands. Reply
  • maximumGPU - Tuesday, May 13, 2014 - link

    Upgrade every 3 years, and new machines for new employees? I'd like to apply to a job in your company, tired of XP and pentium 4's in my current one. Reply
  • YuLeven - Wednesday, May 14, 2014 - link

    Know that feel. I use a Celeron D here. Reply
  • rajod1 - Friday, May 30, 2014 - link

    Well said, yes I agree many gamers or OCing people have sort of a sickness. They are addicted to hardware. They can't help it, like a moth to a flame. They will deny it but it's true. Reply
  • klmccaughey - Sunday, May 11, 2014 - link

    Ian, I am still confused by your description of SATA express/M.2/PCIe based SATA. Any chance of an article or comment on this? I am lost as to what to buy as I don't know now what fits what. I thought SATA express was PCIe SATA? I don't want to buy the wrong board or SSD. Reply
  • Galatian - Sunday, May 11, 2014 - link

    M.2 is simply a different connector type for SATA Express, designed for notebooks and SFF builds. Intel only states M.2 support on the 97 chipset, but it looks like most motherboard manufacturers are releasing boards with SATA Express connectors as well.
    When you shop for a new SSD (and there really isn't anything on the consumer market right now) you need to look at what your motherboard has: does it have an M.2 connector, SATA Express or both? Then buy accordingly, while minding that M.2 SSDs come in different physical sizes and you need to check if they fit into the slot. Drives using the "normal" SATA Express connector will probably come in the standard 2.5" format.

    Also note that - as Ian stated - the current M.2 and SATA Express implementation is already pretty much outdated, since all motherboards (except ASRock on the Extreme 6 series) only expose 2 PCIe 2.0 lanes, which limits the bandwidth to 10 Gbit/s. The Samsung XP941 M.2, for example, is already bandwidth constrained on that setup.

    On that note you might also want to check which protocol the SSD speaks. For example the Samsung XP941 still uses the AHCI protocol. SATA Express is capable of NVMe, and there is no consumer SSD out yet with support for that.
    Reply
  • Laststop311 - Monday, May 12, 2014 - link

    It's going to be a while before we reach the limit. The fastest SATA Express implementation is PCIe 3.0 x4, giving almost 32 Gbit/s of bandwidth, plus NVMe needs to be implemented. Then it will be maxed until PCIe 4.0 is out and it's updated for that. And the cycle continues infinitely.... Reply
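    [Ed: the bandwidth figures in this sub-thread can be sanity-checked from lane count and line encoding. A quick sketch; the GT/s and encoding values are the published PCIe numbers, the function is just arithmetic:]

    ```python
    def usable_gbit_per_s(transfer_rate_gt: float, lanes: int,
                          payload_bits: int, line_bits: int) -> float:
        """Usable PCIe link bandwidth in Gbit/s: raw transfer rate per lane
        times lane count, scaled down by the line-encoding overhead."""
        return transfer_rate_gt * lanes * payload_bits / line_bits

    # PCIe 2.0 x2 (5 GT/s, 8b/10b): the "10 Gbit/s" raw link is 8 Gbit/s usable
    pcie2_x2 = usable_gbit_per_s(5.0, 2, 8, 10)     # 8.0

    # PCIe 3.0 x4 (8 GT/s, 128b/130b): "almost 32 Gbit/s"
    pcie3_x4 = usable_gbit_per_s(8.0, 4, 128, 130)  # ~31.5
    ```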
  • klmccaughey - Monday, May 12, 2014 - link

    Fuck it - that's far too complicated. I am just going to wait until broadwell when they can agree on a standard. This is like betamax all over again. Reply
  • kwrzesien - Monday, May 12, 2014 - link

    +1, I think you've nailed it on the head. I don't ever see myself buying or recommending a SATA Express drive - the connector is a dog. M.2 has potential, but can't they even come up with a size standard? I mean hello, something like t-shirts? S, M, L? The things could be any length and who knows what each board will support. Reply
  • kwrzesien - Thursday, May 15, 2014 - link

    Further research down the rabbit hole: http://en.wikipedia.org/wiki/M.2

    After learning all about M.2 and seeing what is listed on Newegg for the drives and motherboards, I think we are in for a rough ride; nobody is listing all the features that you need to decide whether two parts are compatible:

    1. There are different keys in the connectors in both the cards and slot (B, M primarily but A through M are options). Both B and M support SATA, while B is also used for PCIe x2 and M is used for PCIe x4.

    2. The cards are different lengths but these are standardized into 16, 26, 30, 38, 42, 60, 80 and 110 mm possibilities, with current Z97 desktop motherboards commonly supporting 42, 60 and 80. 80 seems to be the best option for fast and large SSDs, 42 will be mostly for tablets and ultrabooks.

    3. There are different width standards 12, 16, 22 and 30 mm but so far everything seems to be 22 mm.

    You would then call a 22x80 card a 2280.

    4. There are different thickness standards, either single-sided or double-sided of 1.20, 1.35 or 1.50 mm. The connector on the board must support the proper thickness and SS or DS. Codes are S1-3 and D1-5 for single and double-sided at different thicknesses.

    A 22x80 double-sided 1.35 thick card with an M key would be called 2280-D2-M.

    5. SSD cards can use the SATA interface or SATA Express/NVMe (direct PCIe connection, less overhead). Currently all cards (Intel 530, Micron 550) seem to be SATA. Performance should be similar to SATA 6Gbps 2.5" drives so this form factor is really about convenience at this point not performance.

    6. On the specs page at Newegg for the ASUS Maximus VII Gene Z97 (http://www.newegg.com/Product/Product.aspx?Item=N8... board you get this for M.2: 1 x M.2 Socket 3 with M Key. Looking at the physical board layout it supports 2260 and 2280 size cards and includes one fastening screw that can be moved to either position. I don't know how the "Socket 3" translates into single-side and double-side thickness codes.

    7. On the specs page for the Micron M550 512GB M.2 SSD (http://www.newegg.com/Product/Product.aspx?Item=N8... you get Form Factor: M.2 Type 2280 and Interface: SATA 6Gb/s.

    Now it is possible that the motherboard only supports SATA Express/NVMe devices on the M.2 slot while the Micron and other available SSD's are only SATA and may not work. Since the English version of the motherboard manual is not available yet it is hard to say.
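    [Ed: the naming scheme in points 1-4 can be captured in a few lines. A sketch; this helper is our own illustration of the pattern described above, not an official tool:]

    ```python
    def m2_code(width_mm: int, length_mm: int,
                sides: str = "", thickness: int = 0, key: str = "") -> str:
        """Build an M.2 form-factor designation such as '2280' or '2280-D2-M'
        from width, length, optional sided-ness/thickness code, and key."""
        code = f"{width_mm}{length_mm}"
        if sides and thickness:
            code += f"-{sides}{thickness}"
        if key:
            code += f"-{key}"
        return code

    print(m2_code(22, 80))               # "2280" -- the common desktop SSD size
    print(m2_code(22, 80, "D", 2, "M"))  # "2280-D2-M" -- double-sided, M key
    ```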
    Reply
  • Daniel Egger - Sunday, May 11, 2014 - link

    Are you sure the Core i5-4460S doesn't have Turbo? Because all others and the predecessor do. Unfortunately Intel ARK doesn't list that one yet. Reply
  • bharatwd - Sunday, May 11, 2014 - link

    Guys, when do you think the 4690K will release... I'm planning to upgrade, but I can wait a quarter as compared to the purchase of the 4690 Reply
  • DanNeely - Sunday, May 11, 2014 - link

    The most popular date in the internet rumor mill is Jun 2.... Reply
  • Basilisk - Sunday, May 11, 2014 - link

    While I enjoy the image of you chatting away with worms a'dangling, The Bard has his due:
    "Shall I bend low and in a bondman's key, With BATED breath and whispering humbleness, Say this..." [Shylock]. Billy originated the phrase, and his spelling is still considered correct... although it's a common enough error. Think of it as abated (restrained) breathing.

    Informative article, however, so Thanks!
    Reply
  • dwbogardus - Monday, May 12, 2014 - link

    Really? I thought it meant breath that smelled like bait! (just kidding) Reply
  • coburn_c - Sunday, May 11, 2014 - link

    I swear I saw a 'Death of the Desktop' and a 'PC gamers outnumber next-gen consoles' article in the same news feed last week. Reply
  • dave_the_nerd - Sunday, May 11, 2014 - link

    So this is what you were working on?!?!

    10 days with no updates; you had me worried AnandTech. Thanks for the kickbutt review, as always.
    Reply
  • davegraham - Sunday, May 11, 2014 - link

    lol. in your conclusion, you say "baited breath" which would lead someone to believe that you smell of fish. what you meant to say is "bated breath" which approximates to your intent. :) great article overall Ian. Reply
  • IndyJaws - Monday, May 12, 2014 - link

    Merchant of Venice +1 Reply
  • FreeMan4096 - Sunday, May 11, 2014 - link

    Impressive article. Bookmarking the webpage. Reply
  • Galatian - Sunday, May 11, 2014 - link

    Ian: on page 12 dual 7970 testing in Bioshock Infinite the graph for minimum frame rates has the core i3 twice with two completely different values.

    Thanks anyway for the great article!
    Reply
  • extide - Sunday, May 11, 2014 - link

    "Eventually as the future of the chipset progresses, I see all these ports becoming flexible, though I would imagine we are a few years out from this." -- I actually don't think that will happen. It undoubtedly will increase the complexity of the chipset to allow the possibility for every single port to be configurable, thus increasing costs. However, it is very unlikely that you would have a situation where you wouldn't want any SATA ports, or USB3 ports. Thus it seems pretty pointless to make ALL of the ports configurable. I would imagine that we will see a future with MORE configurability, but I would bet you will always see some dedicated ports. Reply
  • mapesdhs - Sunday, May 11, 2014 - link


    If anyone's curious as to how the article's results compare to an oc'd older CPU,
    here are some CB R15 numbers for a 5GHz i7 2700K (RAM @ 2133 CL10, ASUS M4E):

    1-thread: 177
    N-thread: 880

    There's a power consumption difference of course, but the lower purchase cost of
    a used 2700K makes up for it by more than an order of magnitude.

    Ian.
    Reply
  • name99 - Sunday, May 11, 2014 - link

    "For longer cadences it makes sense to launch an improved product in the middle of that cadence taking advantage of minor production improvements."

    Nice save, Ian, but let's be honest here. This product is being launched for one, and only one reason --- Broadwell is delayed, and this is the best Intel can do to fill the gap and quieten the anger from customers (like Apple) who've had to delay all their plans because of the Broadwell slip.
    Reply
  • GuardianAngel470 - Sunday, May 11, 2014 - link

    On page two: "At this point in time it is clear that the i7-4670K and i7-4770K models do not have refresh counterparts..."

    That first i7 seems to be an i5 in disguise. You may want to beef up your internal security; it seems you have been discreetly infiltrated.
    Reply
  • Ian Cutress - Monday, May 12, 2014 - link

    Nice catch :) Fixed! Reply
  • meacupla - Sunday, May 11, 2014 - link

    I only care about that 20th anniversary edition Pentium Reply
  • hojnikb - Monday, May 12, 2014 - link

    Yep, me too :) Reply
  • Ramon Zarat - Sunday, May 11, 2014 - link

    LMAO... My ASRock Z68 Extreme4 GEN3 from 2011, 4 generations behind this Z97, offers me 98%+ of the functionality and speed!

    Ok, I have a modded BIOS to get my 2x SSDs to run in RAID0 (man, I hate artificial market segmentation) and get well over 900MB/s sequential read, but besides that, NOTHING revolutionary from Intel Z97 to make me wish to upgrade from what I already have!

    Already got plenty of USB3 ports with 4 total (only a handful of devices can actually use full USB3 speed anyway), the PCIe lanes from the GPU are already GEN3 thanks to ASRock's GEN3 series, and I have a much more flexible I/O hub/switch (PLX PEX8608) for the 8x PCIe 2.0 lanes from the south bridge, so EVERYTHING is concurrently *LIVE* (gigabit Ethernet, USB3, *ALL* PCIe lanes etc...) - no need to choose a limited setup and call it "flexible" LOL...

    2x SATA 6Gb/s for my 2 SSDs is enough, and the other 4 SATA 3Gb/s ports are also more than enough for mechanical hard drives. I don't see HDDs busting the 3Gb/s barrier anytime soon. I even have a Marvell SATA controller on top for either e-SATA or 2 internal optical drives. Also, I'm already booting from SSD, so PCIe SSD booting means nothing to me. From cold start to login screen in less than 25 seconds, and I always put the computer to sleep anyway, so boot time is actually less than 3 seconds 99% of the time, so that UEFI fast boot also means nothing to me.

    With a 4.7GHz quad core CPU and 16GB of 1600MHz CL8 RAM, I'll keep this rig for a long, long, LONG time! The only upgrade I see in 4-5 years is maybe 2 larger SSDs and maybe a new DX12 video card when they become cheap AND plenty of games require DX12. We have been GPU limited for a long time now and I don't see that changing 5 years from now, when my current CPU will still be more than enough to push a DX12 GPU @ 1080p (I won't switch to 4K resolution before my next PC, 7-10 years from now).

    The desktop might not be dead, but it has surely reached a point where it's so powerful, and good enough for so many of the things you do, that you actually don't need to upgrade every 36 months anymore.

    For example, MP3s are now converted practically instantaneously and HD content only takes a few minutes. You can do HUGE spreadsheet calculations in mere seconds, Photoshop effects as well, etc... One of the only things still too CPU intensive is Hollywood-grade 3D rendering, and for that we now use rendering farms with GPGPU, local or in the cloud, tens of thousands of times faster than any desktop.

    I have the feeling my next PC will not be a silicon based technology!
    Reply
  • wetwareinterface - Sunday, May 11, 2014 - link

    you have 2x ssd's in raid 0 claiming 900MB/s and your boot time to login screen is under 25 seconds?

    i have a single older samsung 240 non pro/evo 250 GB drive and my boot time to desktop with all drivers loaded and internet connected is, after manually logging in btw, around 15 seconds.

    maybe you shouldn't dismiss an upgrade too quickly
    Reply
  • wetwareinterface - Sunday, May 11, 2014 - link

    that's a cold boot time also Reply
  • Ramon Zarat - Monday, May 12, 2014 - link

    Well, that's 1 big difference right there; you don't boot in RAID mode, therefore you don't have the ~5 second RAID BIOS screen to go through as I do! Single SSDs are practically always faster to boot compared to RAID0 SSDs for that reason alone, but:

    I have a lot of stuff installed on an old (3+ years) Win 7 install. That makes the boot take more time. Also, the boot process involves a lot of small files, making the RAID0 less efficient (files smaller than the stripe size are loaded from 1 drive instead of 2), and finally, my older Crucial M4s are slower as single drives than your more recent Samsung, which I guess is the 840 (AFAIK, there is no such thing as a Samsung 240 SSD), especially for writing. You don't mention it, but Windows 8 usually boots faster than the 7 that I use. My guess is you have a fresh Windows 8.1 install. All this explains the other ~5 seconds from my 25 to your 15.
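    The stripe-size effect above can be sketched with a toy model (the two-drive layout and the file/stripe sizes here are illustrative assumptions, not measurements from this setup):

    ```python
    # Toy RAID0 model: a file is split into stripe-sized chunks that
    # alternate across the member drives. A file smaller than one stripe
    # sits entirely on a single drive, so only that drive serves the read.

    def drives_touched(file_bytes, stripe_bytes, num_drives=2):
        """How many drives a sequential read of `file_bytes` hits."""
        stripes = -(-file_bytes // stripe_bytes)  # ceiling division
        return min(stripes, num_drives)

    KB = 1024
    print(drives_touched(32 * KB, 64 * KB))   # small boot file: 1 drive, no RAID0 benefit
    print(drives_touched(512 * KB, 64 * KB))  # large sequential read: both drives
    ```

    With a boot workload full of sub-stripe files, most reads hit only one drive, which is why a RAID0 boot doesn't scale the way a big sequential number suggests.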

    One thing I can assure you: I'm launching games and apps with larger sequential file sizes a lot faster than your single Samsung drive, especially because I also use a 5GB RAM drive for all my system tmp and temp folders (incredible speed boost for the Photoshop scratch disk, for example). CrystalDiskMark doesn't lie.

    I guess my point is, 90% of my SSD access patterns are medium/large size reads, not writes (install a game/app once, play/use it hundreds of times), and I re-boot my PC once every 2-3 weeks or so to "refresh" the system from a clean cold boot, which takes only 25 sec. I sneeze 2-3 times in a row and I miss the boot process entirely! This is nowhere near the "it's so time consuming, I MUST upgrade" scenario. Every other time I boot, which is 95% of the time in fact, it's 3 seconds from sleep... So no, I really don't *NEED* to upgrade.

    I once endured 5-10 *MINUTE* boot times with Windows 95-98, so I'll go along with 25 seconds just fine! I used to power up my PC in the morning, then get my coffee and 2 toasts, and when I was done eating my breakfast, the PC had just made it to the login screen! :) Ahhhhh, the good old days of running Windows 95 on an AMD 486DX/4-120 and a 5400RPM HDD!

    Just like I've said, I won't need to upgrade for a long, long time!
    Reply
  • Flunk - Monday, May 12, 2014 - link

    That's your problem right there: if you're running Windows 7, that's going to kill your boot time. Even my laptop, which only has a SandForce mSATA drive, boots in 7 seconds using Windows 8. Reply
  • Ramon Zarat - Monday, May 12, 2014 - link

    Whatever man... :/ You're happy with 7 seconds? I'm happy for you.

    Then again, this is such a *HUGE* issue, right? I mean, a 25 second boot is a REAL *problem*, right? But come to think of it, 7 seconds is also wayyyy too long, right? This is why 95% of my boots are 55% faster than yours, at only 3 seconds flat from sleep. But when you realllllly think about it, 3 seconds is *STILL* outrageously slow!

    Can you imagine just how fantastic life will be in 10-15 years from now when PC boot in 1 or 2 ***femtoseconds***? Ahhhhh, can't wait because I'm soooo unhappy, super sad and totally depressed with my current near eternity 25 seconds. :(
    Reply
  • MrSpadge - Monday, May 12, 2014 - link

    Sounds like Win 8/8.1 vs. Win 7.. with you using the newer one. Reply
  • mapesdhs - Sunday, May 11, 2014 - link


    Also found this elsewhere for CB R15, i7 4770K @ 4.6GHz, RAM @ 2133 (system
    owner using a Corsair H110 though to handle the heat, whereas my 5.0 2700K is
    running happily with a boring old TRUE and two simple fans):

    1-thread: 183
    N-thread: 933

    Ramon, indeed, I've been arguing elsewhere for a while now that one of the
    reasons the desktop market has been fading is because - for the enthusiast
    crowd - there is nothing worth buying. The rise of tablets and suchlike is
    certainly a factor, but the media doesn't look beyond this narrow focus.
    Enthusiasts (by which I mean those who want top-end gaming systems or setups
    for oc'ing and/or breaking benchmark records) are often the ones with serious
    money to spend (one shop owner in CA told me 40% of his income is from
    customers of this kind), but they're also more likely to already have good
    systems. Anyone with an oc'd 2500K or better config will see little real speed
    boost from any newer setup (even an oc'd i7 870 holds up quite well to newer
    CPUs, eg. gives a better 3DMark11 physics score than a stock 3570K).

    Intel isn't producing significantly better desktop CPUs because it doesn't
    have to (no competition), but IMO they're making a big mistake, because in
    time the lack of useful CPU upgrades will have knock-on effects elsewhere. We
    already see CPU bottlenecks in numerous GPU reviews (that's why reviewers keep
    using oc'd CPUs to test newer top-end GPUs), a situation that's going to get a
    lot worse in the next year or so.

    At some point, gaming-focused desktop customers are going to realise that
    buying better desktop GPUs is pointless because the CPU can't support them
    properly, which will hurt GPU sales. I'd expect NVIDIA and other corps to be
    distinctly unhappy with such a scenario, and what about movie companies and
    others who demand ever greater compute performance? Some tasks can be done
    with GPU acceleration, but many cannot.

    IMO the people with the real money to spend in the desktop PC market are
    similar to those in the hifi market who care more than most about audio
    quality, the kind who always purchase separates; they don't constitute a large
    group in terms of customer numbers, but the monetary pool they represent is
    disproportionately large.

    I've read a lot of responses elsewhere to my suggestions from people who point
    to better SATA3/USB3 support in chipsets after P67 (and they're right), but
    the reality is most people don't need such features and won't notice the
    difference.

    Meanwhile, with Z97 Intel continues using a small no. of CPU-based PCIe lanes,
    which means the resulting boards can hardly be regarded in the same enthusiast
    vein as X58 was when it launched. Intel could have attracted the enthusiast
    crowd with money to burn to X79 with a chipset refresh that updated SATA3,
    etc. (easy to do, just offer an 8-core CPU and/or an unlocked XEON), but they
    didn't bother. I'm sure there's an untapped market out there of people who'd
    love to buy something new that really kicks performance up a level from what
    they already own, but there's little point when one's current setup is an oc'd
    2500K, 2600K, 2700K or any SB-E.

    The fact that one can make these CPUs run perfectly ok at a much higher clock,
    without increasing the voltage, proves that Intel could offer something a lot
    better if they wanted to, but as others have pointed out it's clear Intel's
    recent focus has been far more on power consumption, etc., to target the
    mobile/tablet market.

    When it comes to CPUs, Moore's Law has been dead for at least 3 years.

    Ian.
    Reply
  • Ramon Zarat - Monday, May 12, 2014 - link

    I agree with practically everything you said, but like I've said, I think we have simply reached a plateau where a 4.5 to 5GHz quad (2500K and up, like you said) has enough speed to accommodate a very large number of desktop computing scenarios. There are not too many scenarios/apps that take hours or days of processing. Most of what you do on a modern PC is in the minutes or seconds range and, more often than not, instantaneous.

    I used to wait long minutes simply to apply an echo effect to a WAV file back in the day. I can now do the same on a 10 hour audio file in a few seconds, or apply the same effect to hundreds of streams simultaneously, also in mere seconds. Now text to speech is instantaneous and of very high quality, nearly indistinguishable from real humans! OCR is nothing for a modern CPU. The list goes on and on...

    At some point, there is simply no return on the investment; most stuff is now done in real time or near real time. What's the point of blowing tons of cash to get stuff done 4.4 seconds faster? It used to take 7 seconds with your old CPU, now only 2.6 seconds with the 4790! Back in the day, cutting time from 3 hours to 1 was highly justifiable. Now reducing 7 seconds to 2.6, not so much...

    When we have a fully functional artificial intelligence brain application to drive, or games using ray tracing, then we'll need a 256 core CPU running in the terahertz range, or even a quantum computer. Until then, my quad is more than enough for Photoshop, Adobe Audition/Premiere, AutoCAD, browsing the net and running the Office suite. I can even stream 4K content on YouTube, no problem!
    Reply
  • romrunning - Monday, May 12, 2014 - link

    One area where CPU upgrades can make a difference is in the workstation arena. CAD/CAM applications (like SolidWorks, etc.) can use more CPU muscle in addition to good workstation graphics cards. When you cut down the amount of time needed to create a certain 3D model by anywhere from 25-75%, that actually translates into $ saved - sometimes even tens of thousands of $ when you're talking about a shop that machines parts. The faster you can create & verify the 3D model, the faster the shop can produce the part. Again, the time saved, even if it's only from an incremental CPU upgrade, can potentially equal a lot of $ saved as well. Reply
  • Ramon Zarat - Monday, May 12, 2014 - link

    Exactly my point.

    Nowadays, only very vertical scenarios require, and can justify, the money and the upgrade. In the 286 days, buying the new 386 for 5 grand to get a faster spreadsheet was a valid argument. Not anymore. The Core 2 Quad of 6 years ago is already 10 times overkill for 99% of spreadsheet scenarios! You can now practically run a mini Hollywood studio in full HD for a few thousand bucks!

    Very complex scientific, mathematical/physics models, artificial intelligence, protein folding, weather systems, fluid simulation, etc... and 3D rendering still require all the computing power you can spare. How much of this do desktop users do? Not much. Even simple to average complexity AutoCAD projects can be done on a high end desktop; no need for a full blown $12K workstation as it USED to be the case.
    Reply
  • stephenbrooks - Saturday, May 24, 2014 - link

    Physics modelling, artificial intelligence and fluid simulation are being incorporated into games. So the gamers are pushing us forward :) Not an insignificantly sized industry (billions) either. Reply
  • Antronman - Monday, May 12, 2014 - link

    Nehalem can't even hold a candle to current performance. Don't make up stupid shit.

    You're not an enthusiast, so you're thinking like a normal consumer.
    Because enthusiasts have the money, they will always buy the new parts because they're new. Unless the new parts are worse.
    Oftentimes, there is some aspect of the newer parts that performs better. As an example, the new ROG Maximus VII Hero can easily pull 2933MHz clocks on RAM, whereas the older VI could probably only get to between 2700MHz-2800MHz. That gives enthusiasts a reason to buy the Maximus VII Hero, even if they have the older Maximus VI Hero.

    Moore's law has been dead since integrated processors stopped being a thing.
    Reply
  • Ramon Zarat - Monday, May 12, 2014 - link

    Nehalem came just before Sandy Bridge and is STILL very relevant in the vast majority of desktop user scenarios. This is not the Pentium D we are talking about here, FFS...

    Also, every single benchmark I've read clearly shows that with current CPU architectures, beyond 22-25GB/s of RAM bandwidth (which corresponds to 1600-1866MHz dual channel), there is simply no return on investment in games. 1-2, maybe 3 FPS max. Even worse, in some scenarios it's actually *slower* because of the incredibly loose timings needed to get those insane clock speeds stable!
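    The 1600-1866MHz / 22-25GB/s correspondence is simple arithmetic (a back-of-the-envelope sketch: theoretical peak is the transfer rate times 8 bytes per 64-bit channel times 2 channels; measured copy bandwidth typically lands a bit below that peak, in the quoted 22-25GB/s range):

    ```python
    # Theoretical peak bandwidth of dual-channel DDR3:
    # effective rate (MT/s) x 8 bytes per 64-bit channel x 2 channels.

    def ddr3_dual_channel_gbs(mt_per_s):
        """Peak dual-channel DDR3 bandwidth in GB/s at `mt_per_s` MT/s."""
        return mt_per_s * 8 * 2 / 1000

    print(ddr3_dual_channel_gbs(1600))  # 25.6 GB/s peak
    print(ddr3_dual_channel_gbs(1866))  # ~29.9 GB/s peak
    ```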

    Again, only a few highly specialized and vertical solutions will show appreciable and justifiable improvement beyond 1866MHz / 25GB/s. Even triple VS dual channel fails to show significant improvement in most real life scenarios, except in useless synthetic benchmarks.

    As for my rig, except maybe the video card, I'm confident it easily qualifies as "enthusiast level":

    Corsair Carbide 500R
    Seasonic X-560 Gold
    ASRock Z68 Extreme 4 GEN3 (BIOS modded for TRIM SSD RAID0 support)
    Cooler Master Hyper 612 PWM
    2500K @ 4.7Ghz 1.35V
    16GB 1600Mhz 8-8-8-21 1.5V
    Asus GTX 660 DirectCU II TOP (modded BIOS for 1.21V, 150% power target and 1.3GHz overclock)
    2 X Crucial M4 128GB SSD in RAID0 (900MB/S sequential read)
    2TB WD Green HDD
    5GB RAM DISK (9850MB/S sequential read)
    Mushkin Enhanced Ventura Pro 64GB USB 3.0 (120MB/s sequential read)
    LG Blu-ray burner
    3 X LG 24in LCD
    Klipsch Promedia Ultra 5.1
    Sony ZX600 Headphone
    Ducky Shine DK9008S White LED Cherry brown switch
    Anker 5000 DPI, 11 buttons laser mouse
    Razer Onza Xbox 360 gamepad - Battlefield 3 Tournament Edition
    2 X Cyberpower CP850PFCLCD PFC Sinewave Series UPS
    D-Link DIR-655 gigabit router
    HP Procurve 1400-8G gigabit switch
    16TB FreeNAS NAS box in the basement (4 X WD RED 4TB)
    3TB USB3 WD My Book Essential for backup
    Dlink DIR-615 wireless bridge (moded with DD-WRT Firmware) To hook 3 old school PC to the network: P54C 120Mhz, P3-450Mhz and Tualatin 1.4Ghz.
    4 port KVM switch

    And don't tell me I need a dual socket, 12 cores, 24 threads, 64GB RAM DDR3-3000, quad SLI, water cooled setup on 2 X 1500W PSU and 3 X 27in IPS 4K LCD to qualify for enthusiast...

    Finally, "enthusiast" doesn't necessarily mean "stupid moron with an e-penis complex, ready to blow insane amounts of cash like it's the end of the world on the latest hardware for no apparent reason, just to brag about 1FPS and 1MHz more in every forum." I love technology and I like my money to stay in my pocket for as long as possible. When I make a move, I want the best and I want it to last. I guess I'm just an enthusiast, but with a brain still intact and functioning properly! :)
    Reply
  • Antronman - Tuesday, May 13, 2014 - link

    http://www.bing.com/search?q=define+enthusiast&...

    Based on that comment, I can tell you are not deeply involved in the construction of computers or extreme overclocking.
    Reply
  • mikato - Thursday, May 15, 2014 - link

    Antronman- your stupid Bing link does not say that being an enthusiast means someone deeply involved in the construction of computers or extreme overclocking. Give me a break, master lamer. Just throwing out a link doesn't make something true. Pretty much everyone reading this article is an enthusiast. Not just that, but he is commenting and listing his computer specs. Come on. Go spread your lame BS somewhere else because I'm all out of patience for it this morning. Reply
  • royalcrown - Wednesday, May 14, 2014 - link

    The one thing I'd upgrade on yours is the 660. Even coming from a 680, I was surprised at the difference between that and my current 780ti. IMO go for a vanilla 780, you'll be pleased I bet. Reply
  • mikato - Thursday, May 15, 2014 - link

    "Because enthusiasts have the money, they will always buy the new parts because they're new."

    Nope. That is some kind of warped perception of reality. Do enthusiasts require the newest part to get through life? Do enthusiasts never build a computer to last a few years? Do enthusiasts have unlimited money, or pretty much care about nothing else in life besides the newest computer parts?

    Please provide bing link to confirm your answers, lol.
    Reply
  • jamescox - Sunday, May 11, 2014 - link

    Not much of interest with this refresh. For most consumers, anything from the last few generations of CPUs offers sufficient processing power. I don't think this is going to change until we get a major form-factor change to something more GPU centric. The overclocking chips coming out later may be of interest, but I don't know if I will buy one. I have been wondering if anyone will integrate a thin vapor chamber instead of just a "lid"; this seems like it would handle hot spots and such, but it may not be worthwhile. Reply
  • Samus - Sunday, May 11, 2014 - link

    If my brand new H87 board doesn't run Broadwell in 6 months, I'm going back to AMD on principle. Not since the 965/975X has a successor processor not supported the previous gen chipset with the same socket (in that case, the Intel 30 series chipset, which supported a 1333 FSB). That was 7 years ago.

    If Broadwell is simply a die shrink, why the hell would they abandon millions of 80-series motherboards, other than to alienate people back to AMD?
    Reply
  • KAlmquist - Monday, May 12, 2014 - link

    Actually, you are wrong about that. The B65, Q65, and Q67 chip sets only support Sandy Bridge, not Ivy Bridge. Reply
  • Samus - Monday, May 12, 2014 - link

    Ohh wow yeah...damn intel are bastards with this crap. A new chipset just to support a die shrink? Reply
  • Ramon Zarat - Monday, May 12, 2014 - link

    Artificial market segmentation is indeed a highly anti-consumer business practice, especially when abused to the extent Intel does it. Reply
  • hasseb64 - Monday, May 12, 2014 - link

    5 years ago, this release would have been a single news flash, not an article. Not blaming the sites; there is no news/momentum in DIY PC anymore. Reply
  • milkMADE - Monday, May 12, 2014 - link

    I think you meant the i7-4790K... as the non-K is the 4790.

    "To that end, Intel is going to release ‘Devil’s Canyon’ in due course. Devil’s Canyon has no official SKU name yet (i7-4970K or i7-4770X are my best guesses)"

    4970K would make me think of a successor to the 4960X Ivy Bridge-E 6-core.
    Reply
  • AndrewJacksonZA - Monday, May 12, 2014 - link

    If you didn't clarify "Human limited" and "CPU limited" I would've understood them the other way around:
    "Human limited" has the human being as the slowest part of the system with a very fast CPU.
    "CPU limited" has the CPU as the slowest part of the system.
    Reply
  • Harry Lloyd - Monday, May 12, 2014 - link

    Testing performance of these CPUs is completely pointless. The only interesting thing about them is power consumption and thermals. Reply
  • geok1ng - Monday, May 12, 2014 - link

    The benchmarks of the Xeon 2687 v2 8C/16T imply that the upcoming Extreme Edition Haswell-E CPU will be a landslide. The 5960X can't come soon enough. Reply
  • Antronman - Monday, May 12, 2014 - link

    If the 5960x will be as bad as the 4960x, the 5930k can't come soon enough either. Reply
  • milkMADE - Monday, May 12, 2014 - link

    too bad the only SKU with 8 cores to deliver that landslide is the $1000 EE 5960X Reply
  • geok1ng - Tuesday, May 13, 2014 - link

    a Xeon 1680 v2, single socket 8C/16T, costs $1887, and after 6 months we still do not know if it is unlocked like the 6-core 1650/1660. So $1000 for a 5960X is not outlandish Reply
  • Chrispy_ - Monday, May 12, 2014 - link

    Looking at the Z97 vs H97, why are small businesses not allowed to use SRT?
    Such dumb.
    Reply
  • Antronman - Monday, May 12, 2014 - link

    How many fucks I give: 0. Reply
  • StrangerGuy - Monday, May 12, 2014 - link

    Intel can spam a million SKUs left and right while pretending it's as relevant as 10 years ago, and it still doesn't mask the fact that there are only about 2 chips, at $50 and $220, that make sense for 90% and 9.99% of desktop users respectively - and most of them are well served by chips sold 3 years ago. Reply
  • Antronman - Monday, May 12, 2014 - link

    Actually there are more "sensible" enthusiast chips. For example, if I am running a 4K setup, and I also record gameplay and upload it to YouTube, I want something notably better performing than the 4670K. And I would say that far more than .01% of the computer-using community is pro overclockers and video editors and scientists and software developers.

    There are so many niches these days that a much greater percentage of people need specific solutions than most people think.
    Reply
  • MrSpadge - Monday, May 12, 2014 - link

    Am I the only one to notice that, especially in the first benchmarks, the 4790 often outperforms the 4770K by ~6%, sometimes even by 11% (e.g. the very first benchmark)? Verifying an expected lack of improvement is nice, but not even commenting on such discrepancies seems... not like usual AnandTech quality. If I were you I'd repeat those benchmarks and - if verified - would try to find the reason for the 4790 (sometimes) performing significantly better than expected. Reply
  • stephenbrooks - Saturday, May 24, 2014 - link

    Yes, that's weird: the improvement is greater than the clock bump. Reply
  • RickyBaby - Monday, May 12, 2014 - link

    So, I would like to thank Ian on another job well done. A thankless job no doubt, as per the discussions; this was a purely marketing driven release and offered essentially nothing of value. I did have one question though. As background: like most here, it appears, I am running an older, home-built rig. A rig built several generations ago which competes very well with today's rigs.

    I am an overclocker from way back and like to squeeze out a little extra performance... just to know that I haven't been cheated, lol. JK. Anyway, I'm thinking of buying an i3 and overclocking it what little I can, or even going for an i5 K chip. So, what overclocking options are available anymore? ANY???

    The multi-core enhancement question for K processors was not answered in this article. Does it work on the 97 chipset with the new Haswells or not??? And does the limited BCLK overclocking work? Again, an i3 (locked) and the 97 chipset: does it work or not? I'm sure I'm not the only one to realize that a 3.7GHz i3 with a BCLK of 108 would be a cool 4.0GHz. And that, my friends, is locked at 4.0. 1 thread, 2 threads, and even 3 and 4 via HT, all at 4.0GHz. Since i5s and i7s throttle down depending on the # of cores being utilized... up until more than 4, it seems that an i3 would kick ass. And here is where most could agree. Do I do ANYTHING that requires more than 4 threads, 100% clocked at 4GHz, for large stretches of time? NO. I. DO. NOT. And the difference in cache between an i3 (4MB) and an i7 (8MB) is again pretty much meaningless. Anyone know the diff in hit rates? 96% vs 97%? So an even slightly overclocked i3 would make for an enticing $ proposition.
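    The BCLK math above is just base clock times multiplier (a quick sketch; the 37x multiplier for a 3.7GHz part is the assumption here, since on locked chips the multiplier can't be raised):

    ```python
    # Core clock = BCLK x multiplier. Locked chips keep the multiplier
    # fixed, so a small BCLK bump is the only overclocking lever left.

    def core_clock_mhz(bclk_mhz, multiplier):
        """Effective core clock in MHz."""
        return bclk_mhz * multiplier

    print(core_clock_mhz(100, 37))  # 3700 MHz at the stock 100MHz BCLK
    print(core_clock_mhz(108, 37))  # 3996 MHz, i.e. the ~4.0GHz quoted above
    ```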

    But this all gets ignored. Ian, care to address the overclocking situation in a follow-up article? And no, I'm not referring to Devil's Canyon or the unlocked Pentium, which is core limited (2, no HT) and cache limited. But for people who buy a non-K Haswell: what options do they have, and does it make a real difference? I do wonder what Intel would think if an i3, overclocked just a little bit, would outperform every stock i5 and i7, though. Probably not a happy camper.
    Reply
  • Antronman - Tuesday, May 13, 2014 - link

    K series is core unlocked. Overclock it or underclock it to any clock rate you want.

    Non-K is multiplier locked; you can only use Turbo Boost, which is a very small increase in clock.
    Reply
  • RickyBaby - Tuesday, May 13, 2014 - link

    Sorry but you didn't answer the question. Can anyone ? Or is this some sort of no-no answer that everyone is supposed to give to satisfy the GIANT in the room. That GIANT of course being Intel.

    So why the confusion? Here is why: Tom's just did a roundup of some of the new mobos and addressed overclocking. In that section (page 23, I believe) there are 3 charts. The middle chart gives the maximum base clocks for each of the motherboards on each of the 3 strap settings. The Gigabyte mobo reached a BCLK of 114 with the strap set to 100, which is the default and is unchangeable on non-K chips. You cannot change the strap, but I do believe that you can change the clock. Would that not be a 14% overclock? If not, why not? The comment on the Tom's page again seems to imply, but not out-and-out claim, that you could overclock ANY locked CPU by 14% using that board. Here is exactly what he said:

    "the Z79X Gaming 5 reached the highest base clock frequency when using the 100 MHz strap. That’s the only ratio available on multiplier-locked processors, so this might be important to anyone running the new Core i7-4790"

    Why would that be important ? Because the 4790 is not a K and is therefore locked. But you could still overclock it by 14% .. which is how I read that.

    And Tom's again muddies the waters. In that same article, in the review of the ASUS board on the 3rd page, it talks about BIOS settings for overclocking and the use of the XMP setting. Again, a quote:

    "Unless you're using a K-series CPU, overclocking is limited to a handful of 100 MHz speed bins over stock. So, we reverted to our Core i7-4770K to test it."

    So do XMP and Multi-Core Enhancement still exist and still work for non-K CPUs? Apparently they do. With the latest chipset (97) and the latest chips (4th gen), too.

    Sorry, but I wish someone would just come out and say it. If your board supports BCLK increases, then you can overclock by that amount. Now, that is not a lot; most boards I've seen are 5%-7%. And if your chip supports Turbo, then XMP/Multi-Core Enhancement is alive and well too. So take an i5, increase BCLK by 10% and lock your Max Turbo using XMP, and you'd have a decent overclock of 15%-20% under a heavy load.

    I'm just waiting for someone (like Ian) to confirm/deny that the above is true.
    Reply
  • MrSpadge - Tuesday, May 13, 2014 - link

    Ricky, your concern is very valid! When Haswell launched I had hoped to get i7 4770R with Crystalwell L4 cache and to be able to set all cores to 3.9 GHz (max single core turbo), unlimited power consumption and a BCLK of 102.5 - 107.5 MHz for 4.0 - 4.2 GHz. This could run at very energy efficient ~1.00 V and would outperform pretty much every other quad core if the L4 works well (>10% performance per clock) and otherwise still be decent. Without any heat problems (power consumption would probably be below a stock 4770).

    BUT there were reports of multi-core enhancement not working for non-K models, which renders such a plan useless, before I even get to the point that you cannot buy a 4770R soldered onto a regular mainboard (just those notebook-expensive mini boxes).
    Reply
  • JokerProductions - Tuesday, May 13, 2014 - link

    Still the same horrible 1150 socket, but now with a 2% performance gain. Yay! Still waiting on X99 and my 8 core. Reply
  • eanazag - Tuesday, May 13, 2014 - link

    Finally some Bench love; I see the scores are in Bench. I'd like to see some updates covering what AMD is still selling, as far as the Bench apps tested. Comparing the FX-8350 with the new Haswells gets rough with the limited apps that line up. Reply
  • The|Hunter - Tuesday, May 13, 2014 - link

    btw Intel confirmed Broadwell and Z87 compatibility back in September 2013;

    One Intel rep said: "Broadwell is going to enable 2 types of devices. One, you can plug the chips directly into existing systems (Z87), and second, we will have brand new systems with a broad new range of fanless designs..."
    https://www.youtube.com/watch?feature=player_detai...

    *Z87 and Devil's Canyon needed an Intel MEI/UEFI firmware update..
    Reply
  • jjjag - Wednesday, May 14, 2014 - link

    Desktops ARE dying. Stop saying they are not. Desktop silicon is exactly the same as mobile silicon, and has been for several generations. The number of SKUs sold into desktop has been rapidly declining, driven by the declining number of desktop computers offered by the likes of Dell and HP, which of course is driven by demand.

    HEDT, a.k.a. "Extreme", has been different silicon from mobile/desktop since the 2nd gen Core parts. These are server parts that are de-featured and re-badged as desktop. The volumes are too small and declining to justify these for much longer.

    To answer another's question: you will not see a desktop with Iris Pro until Broadwell. Unless you count that little Gigabyte box that uses the Haswell Iris Pro.

    To respond to another. Intel is not sabotaging anything. To believe that is ignorant. It's all driven by demand. Once demand drops below a certain level, it does not make business sense to sell a particular part.
    Reply
  • stephenbrooks - Saturday, May 24, 2014 - link

    I guess in the event that desktops die, I could attach my two 23" screens, keyboard and mouse to a laptop dock instead, because that's more futuristic or something. Reply
  • mikato - Thursday, May 15, 2014 - link

    "the limiting factor is the technology between the keyboard and the monitor: the user"

    Uhh, the user isn't between the keyboard and monitor. The user is on the end of a branch, past the keyboard. Maybe in the future... :)
    Reply
  • jayshank7 - Thursday, May 15, 2014 - link

    I bought a 4770K 2 months back, so I won't be getting anything before 2016, to be honest... I may build a Broadwell based i5 system, but my main 4770K rig with 3 x 280X Toxic would be here for those years. Reply
  • HardwareDufus - Sunday, May 18, 2014 - link

    Looking forward to a Broadwell based i7-5790K with Iris Pro (HD 5200+) Reply
  • Krysto - Wednesday, May 21, 2014 - link

    The only reason there's even a "Haswell Refresh" is because Intel blew it with Broadwell and got delayed by a whole 6 months. In 2015 they will have lost most of their process advantage and will be only 6 months ahead of TSMC, once it gets FinFET at 16nm, compared to IVB/Haswell/Silvermont, where they had a generation/node-and-a-half advantage. Reply
  • Be Careful - Friday, May 30, 2014 - link

    Hey technology nuts I would like you to read this:
    http://www.jimstonefreelance.com/corevpro.html
    Reply
  • deruberhanyok - Monday, June 02, 2014 - link

    I might be late to the party, but, on page 9, the bioshock infinite benchmark charts - is the second one mislabeled? The minimum frame rates? It seems to be. Reply
