The AnandTech Podcast: Episode 22

by Anand Lal Shimpi on 7/19/2013 10:29 AM EST



  • andykins - Friday, July 19, 2013 - link

    2 podcasts in 1 go? You guys have just made my weekend...
  • Zink - Friday, July 19, 2013 - link

    I like the smaller file size, and breaking the podcast up by topic is a good idea.
  • klagermkii - Sunday, July 21, 2013 - link

    I like the breaking up by topic, but I think it helps to have more than just two contributors. Even if the other guy is sitting out for a lot of the time, here I think it would have helped to have some of Ian's thoughts. Having more people also gives the individuals involved more time to gather their thoughts while the others are talking, and that might relieve some of the pressure and ease any nervousness.
  • ThomasS31 - Friday, July 19, 2013 - link

    My concern isn't just the overclocking issue with the heat spreader/paste, but also that the general temperature of the core under constant stress is higher, and you can't remove that heat as well as on Sandy Bridge.
    It's a special case, I know, but when you run heavy FP/AVX loads it's an issue.

    Also, I think the general price competitiveness is worsening over time.
    The reason for this isn't clear: does Intel want more margin, is it the shrinking PC market, or just the lack of competition?
  • Callitrax - Friday, July 19, 2013 - link

    Couple things on Haswell:
    It's not only Intel's mobile and desktop core (where mobile is winning out in design choices); it is also the workstation, server and HPC core, so the mobile improvements at the expense of top-end performance hurt there too. Intel's design choices in favor of mobile strongly limit high-end computing for more than just the home enthusiast. (Yes, the Xeons are different cores, but that's mostly core count and cache size; it will still be Haswell at the core - and some workloads respond better to per-thread performance than to more threads.)

    And I think Dustin's complaint basically comes down to the fact that the performance increase isn't big enough. Anand keeps talking about IPC, but IPC is only half of the equation: performance isn't a function of IPC alone, it's IPC × clock speed (instructions per second). The problem on the desktop is that Haswell's peak performance isn't a big step over Sandy, not just overclocked but at stock speeds (which haven't changed at all). Looking at the Haswell test results, Intel's performance is up 15-20% in 2.5 years (Sandy's release to Haswell's).
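    A back-of-the-envelope sketch of that point in Python (the IPC and clock numbers here are illustrative assumptions, not measured figures):

```python
# Performance (instructions per second) = IPC * clock.
def perf(ipc: float, clock_ghz: float) -> float:
    """Throughput in billions of instructions per second."""
    return ipc * clock_ghz

# Illustrative values: Sandy Bridge IPC normalized to 1.0,
# Haswell assumed ~17% higher per clock, both at a 3.5 GHz stock clock.
sandy = perf(1.0, 3.5)
haswell = perf(1.17, 3.5)

# With identical clocks, the generational gain is exactly the IPC gain.
print(f"Gain at equal clocks: {haswell / sandy - 1:.0%}")
```

    If stock clocks don't move, the IPC improvement is the entire performance improvement, which is the crux of the complaint.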

    To the question Anand asked, "what would you recommend for a new system?", yes, the answer is Haswell. But would you upgrade from Sandy? It's probably not worth it. That's probably the biggest reason why desktop sales are down: no need to upgrade.

    The Mac Pro has another area where it has market share besides creative work, and that's scientific usage (the lack of POSIX in Windows hurts). I'm sitting in a room with 4 Mac Pros. Apple's design choices for the Pro are problematic for this market because many of those programs are either CUDA-based, which the FirePro can't run, or CPU-based, where the single-CPU design is a problem. (And the people running the programs aren't programmers and aren't paid to deal with the programming hurdles of converting them.) The new Mac Pro would probably be a regression for some of our workloads due to the reduction in CPU cores, even with faster clocks.

    As an aside, for a funny read, I came across page 2 of Anand's Coppermine review recently; Intel's mobile processor strategy makes for a fun read from today's point of view.
  • watersb - Tuesday, July 23, 2013 - link

    From 2000 to 2010, I watched Macs take over the annual ADASS conference: Astronomical Data-Analysis Software Summit. (science software? astro-data-something-something...) Initially, we all used Xeon-based workstations, running Linux, for "real" work... Budget constraints pushed for only one computer per person, which could work against Apple (too expensive) or for Apple (best-of-class notebooks for POSIX work. Hmm.)

    A while back I stumbled upon a claim that any supported GPU in a Mac Pro is basically a "pro" product, as the key differentiator is the driver stack. OK, but aren't there real differences with FirePro and Quadro cards, particularly in double-precision floating point?

    I only got around to playing with a first-gen Tesla C1060 in my Mac Pro before I sold it. The C1060 lacks full hardware support for double-precision. So I have no clue.
  • Slowman61 - Friday, July 19, 2013 - link

    I think Apple could have some good success with the Mac Pro if they added a version with an Ivy Bridge-E and used Titans. A lower price point with a much broader and more forgiving audience that would love to have something with more oomph than their iMacs. Shoot, if they made one with an i7 and GTX 770s I might even buy it.
  • Kevin G - Friday, July 19, 2013 - link

    I'm with Dustin on a lot of the criticism of Haswell. Designing for high IPC and high clock on modern processes is possible - IBM is doing it with their server line. Intel has been focusing on mobile with their core designs, and that has nerfed the desktop line.

    Not mentioned in the podcast is that desktop users have been burned by constantly having to purchase a new motherboard alongside a new CPU: socket 1156, 1155 and now 1150. I'm currently on a Sandy Bridge i7-2600K and would like a drop-in replacement for something faster. The Ivy Bridge i7-3770K isn't worth the money for the very marginal improvements. Haswell has some IPC improvements, but the cost/benefit there is even worse since I'd have to get a new motherboard to support it.

    I'm also frustrated that there is no GT3e chip in socket 1150. The L4 cache does improve performance, radically in some cases. Tech Report ran some scientific tests where the i7-4960HQ challenged an i7-3970X - a mobile chip competitive against a chip with two more cores and 50% higher clock speeds. I'm also irked that TSX and VT-d have been disabled on the K-series parts. I'm flat out livid that Intel has no plans for a socketed Broadwell.

    The Mac Pro has suffered because Apple has ignored the market it serves. Prior to the announcement this summer, the last time Apple really updated the line was back in 2010, when they just added Westmere support to the 2009 models. Three years! I want expandability: my Macs traditionally have had an additional IO card and gone through one video card upgrade. Yeah, I'm an owner of the oddball PCI-E based quad G5, and that machine had more than one video card.

    So what does the new Mac Pro bring for its lack of expansion? It still goes to 12 cores, just in a single socket instead of the two in the 2010 model. CPU clock speeds are going to be similar, so the raw performance gains will come from IPC improvements from Westmere to Ivy Bridge-E (roughly 8%) and IO improvements. AVX is there, but that requires getting or recompiling software. Memory is still DDR3, and the capacity increase on the new Mac Pro is due to Apple using registered ECC memory instead of vanilla ECC DIMMs. Of course, the Xeons in the 2010 Mac Pro supported registered ECC DIMMs, but Apple artificially disabled that feature. Oh, and if you want to upgrade the memory in the new Mac Pro you are going to have to throw out the old memory and replace it - there are no free DIMM slots.

    The FirePro cards are nice, but by the time the Mac Pro reaches the market the Radeon 8000 series should have arrived. Thunderbolt 2 is a bit of a joke - the bandwidth per channel increases, but they also halved the channels. If channel bonding worked correctly, Thunderbolt could drive an 8K display at 30 Hz. And despite all that bandwidth, Thunderbolt doesn't provide enough for a modern GPU, at least not without a performance hit. Apple has nerfed my plans for GPU expansion over the life of this system. The one part that really shines as an improvement is the PCI-E based SSD. It'd be nicer if there were more than one SSD slot for expansion down the road, but this should suffice for a while.

    I've literally been saving up for a personal Mac Pro since late 2011. Now I'm facing the choice of getting the machine I could have gotten in 2011 and curse myself for two years of waiting or get the new compromised system. I'm going to hate myself either way.

    The problem with socket 2011 lasting two generations is that the X79 chipset is buggy. Remember the roadmaps where it was supposed to have SAS ports? And at 6 Gbit? Yeah, that was neutered for the consumer versions, and select server versions only got 3 Gbit SAS. Intel could fix the existing chipset for less than the cost of designing one from scratch (perhaps just a respin?), rebrand it, and make the high-end crowd happy.
  • Death666Angel - Saturday, July 20, 2013 - link

    "Westmere to Ivy Bridge-E (roughly 8%)" Where do you get that number from? Should be quite a bit higher. Also, IVB-E has support for some more extensions like AVX. Reply
  • Kevin G - Sunday, July 21, 2013 - link

    I'm not expecting the 12-core Ivy Bridge-Es to carry an aggressive clock speed compared to Westmere. I'd be surprised if the 12-core Ivy Bridge-E reaches the same clock speed as the top-of-the-line 8-core Sandy Bridge-E (2.8 GHz). I'd fathom a 2.0 GHz default clock on the new Mac Pro with a 2.7 GHz top-of-the-line option. For reference, the 12-core 2010 Mac Pro is 2.4 GHz stock with a 3.06 GHz high-end option. Thankfully IPC gains and IO improvements will counter the reductions in clock speed. I'm not counting on Ivy Bridge-E's turbo to be of much use in the Mac Pro's new chassis, as I have a feeling it'll run a bit hot by default there. (This will be especially true under CPU + GPU loads due to the new Mac Pro's shared heat sink.) Due to the differences in clock speed, some software will likely run faster on the older 2010 Mac Pro if it can't use Ivy Bridge-E's improvements to IPC. I'm optimistic that this case will be rare.
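    A quick per-core sanity check of those guesses in Python (the clocks are the estimates above; the ~8% IPC uplift is an assumption, not a benchmark):

```python
# Per-core relative performance ~ clock * IPC factor.
def relative_perf(clock_ghz: float, ipc_factor: float) -> float:
    return clock_ghz * ipc_factor

westmere = relative_perf(3.06, 1.00)  # 2010 high-end option, baseline IPC
ivb_e = relative_perf(2.70, 1.08)     # guessed new high-end clock, ~8% IPC gain

print(f"IVB-E vs Westmere per core: {ivb_e / westmere:.2f}x")
```

    At those numbers the new chip comes out slightly behind per core (~0.95x), which is exactly why clock-sensitive software could run faster on the 2010 machine.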

    One thing I forgot to mention is that the new Mac Pro actually has less memory bandwidth than the current 12-core Mac Pro: the 2010 machine's second socket brings three more memory channels, which helps out a lot here.

    I also mentioned AVX, but to take advantage of it existing software needs to be recompiled. This is where Apple's claims of double the performance of the previous Mac Pro are coming from. Users expecting that universally are going to be very, very disappointed.

    Overall the new Mac Pros do not seem attractive from a CPU standpoint. Going from two sockets to one kills much of the Mac Pro's luster. Most of the improvements (6-core Westmere -> 12-core Ivy Bridge-E, six-channel DDR3-1333 -> quad-channel DDR3-1866) are just negating the loss of that additional socket. Luckily Apple never released a Sandy Bridge-E system in the 2010 chassis, as the new Mac Pro would clearly be a step backwards from that hypothetical system.
  • Computer Bottleneck - Friday, July 19, 2013 - link

    Regarding the B-clock....

    Yes, it would definitely be great to have that feature on non-K parts (particularly the budget LGA and BGA chips, where the base clocks are still plenty low enough to provide lots of overclocking headroom).

    In fact, thinking about this more, I'd imagine the cost of building the entire system would be quite good. (re: no need on a responsibly OC'd 22nm dual core for a large aftermarket CPU cooler or lots of VRM phases on the motherboard, etc.)
  • gxtoast - Friday, July 19, 2013 - link

    So, given that the silicon is generating more heat per square millimeter (a result of increased die density), why did Intel choose a lesser heat-transfer medium between the CPU die and the package?
  • Soulwager - Friday, July 19, 2013 - link

    There are two parts to the heat dissipation issue. The most obvious half is the interface between the die and the cooling solution, which is partially due to paste being used instead of solder and partially due to inconsistency in the size of the gap between the die and the IHS. The second half is the thermal conduction between the circuitry within the die and the surface of the die that contacts the cooling solution. Yes, the surface area here is finite, but if that were actually a problem it could be solved with additional backgrinding and a better heat spreader (pyrolytic graphite, for example).

    The problem isn't technical, it's simply a lack of competition. I still don't understand Intel's motivation for disabling TSX on the i7-4770K. What is preventing Intel from tacking on an extra 50-100 bucks for a fully functional die?
  • dragosmp - Saturday, July 20, 2013 - link

    I'm with Dustin on the disappointment in Haswell and Ivy because of the performance plateau (at the OC sweet spot). If one were building a completely new system and needed a high-end chip, I too would advise Haswell, but to upgrade from Sandy or Ivy there's just so little benefit.

    If someone said "I'm constrained by the mITX case @ 50W", then even an upgrade from Ivy to Haswell may bring performance benefits, but for the most part that's a fringe case. In any other upgrade scenario, I find the cost of moving from SB or IB to Haswell exceeds the minute benefits of the architecture, especially since the best features are either botched (QS), disabled (TSX), unused for months to come (AVX2) or not even on the desktop (Crystalwell).
  • ZeDestructor - Saturday, July 20, 2013 - link

    Nice podcast, but I feel you guys completely missed one possibility that Intel might go for, which has been speculated about for some time now: quitting the "mainstream" socketed desktop market entirely.

    Personally, I feel this could work out quite well: let's face it, most people are buying laptops these days, and all-in-one machines work better with mobile chips anyway. Intel could unify the mainstream desktop and laptop SKUs quite easily, and even keep sockets (most non-Ultrabook machines still retain the good old socket due to the variety of specs they have on offer), although I suspect that even then they'd go for BGA parts simply to reduce SKU count... after all, how many CPU options do you need when building machines on identical boards?

    Meanwhile, at the high end, Intel can offload some lesser-quality Xeons onto the unlocked parts (disabled cores and QPI, but who cares, really...) fairly easily and keep us enthusiasts quite happy - in hindsight, I'd have gone for the i7-3820 instead of the i5-3570K for my build late last year.

    In addition, Intel could also stop releasing large-die server parts on every cycle and instead do so on every tock: how much power efficiency do you need on a server? Let the mobile parts do the heavy validation testing of the process, then launch the big IPC tweaks across the entire lineup...
  • Death666Angel - Saturday, July 20, 2013 - link

    "in hindsight, I'd have gone for the i7-3820 instead of the i5-3570K for my build late last year."
    Why? If you need 8 threads, get the i7-3770K. Unless you need the memory performance/size or the additional PCIe lanes, there is no benefit. The money you save with the CPU you have to spend extra on the mainboard. And there are far fewer options available for that as well. OC wise, I see no better results for the 3820 than the 3770K.
  • ZeDestructor - Saturday, July 20, 2013 - link

    I'm planning to drop in a second GTX 670 to power the triple-screen setup, and I had already planned on 32GiB of RAM (very nice for compiling stuff). Factor in all of that, and moving from Z77 to X79 wasn't too big a jump. I just forgot the 3820 was unlocked and existed at the time, and thought I'd need to move to the 3930K, which is a much harder pill to swallow. Oh well, it's not like there's much to stress a 4.4GHz quad-core IVB now, is there...
  • Death666Angel - Sunday, July 21, 2013 - link

    The 3820 is not unlocked, but it does feature base-clock overclocking. And two mid-to-high-end graphics cards shouldn't be too hindered by the Z77 platform. :)
  • ZeDestructor - Monday, July 22, 2013 - link

    Actually it is. I found that out after I finished building... :/

    For Ivy-E, Intel is putting the K where it belongs...
  • Hung - Saturday, July 20, 2013 - link

    So who owns that cat? Or is that someone making cat noises (happened to me in Skype during LoL once).
  • Kevin G - Sunday, July 21, 2013 - link

    Dustin's. My guess is that its name is Zoe.
  • robE - Sunday, July 21, 2013 - link

    Dustin has to relax or something :D Saying "Aaaaa, aaaaa" every couple of seconds can be annoying :( That aside, nice talk... hope there will be more desktop talks :D
  • JlHADJOE - Sunday, July 21, 2013 - link

    +1 to that. Dustin sounded rather nervous, and aside from the aaahs and uhms was really swallowing some of his words. Should be better by the next podcast.
  • Coup27 - Sunday, July 21, 2013 - link

    I actually thought the audio was quite bad on this one. Dustin's audio was substantially louder and punchier than Anand's, and as a result you had to pick a volume where either Dustin was a little too loud or Anand a little too quiet. Compare that to episode 23, where Anand and Brian virtually sounded like they were sitting next to each other, which was great. I do appreciate that you have different recording locations and probably different microphones, but I'm sure the audio could have been harmonised a bit better than on this episode.

    Please take my criticism as constructive.
  • Graham. - Sunday, July 21, 2013 - link

    Agreed. Listened in the car, and when the volume was adjusted to Anand's voice, whenever Dustin talked it was piercingly loud. Adjusted to Dustin's voice, I had to struggle to hear Anand. Was never a problem in previous podcasts, just this one. Nothing some minor mixing wouldn't fix.
  • klagermkii - Sunday, July 21, 2013 - link

    I think the whole death-of-the-enthusiast-market story is becoming a self-fulfilling prophecy. I know over the past decade I've generally waited for things to get 2x-4x faster on the CPU front before I upgraded, and that meant pretty much every two to three years. When I go onto Bench now and compare my four-year-old Lynnfield with a current top-of-the-line processor, I'm looking at a 50% improvement. There's just no motivation to upgrade, and this is going to further reinforce the signs Intel is seeing that it doesn't need to worry about the desktop market. If you bought a PC two years ago there's absolutely no reason to upgrade, and when did we last see that?
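    The compounding math behind that slowdown is easy to sketch in Python (the 50%-over-four-years figure is from the Bench comparison above; everything else follows from it):

```python
import math

# Years needed to reach a target speedup at a constant annual rate.
def years_to_reach(target: float, annual_rate: float) -> float:
    return math.log(target) / math.log(1 + annual_rate)

# ~50% total improvement over four years implies the annual rate.
annual = 1.5 ** (1 / 4) - 1

print(f"Implied annual gain: {annual:.1%}")
print(f"Years until a 2x upgrade makes sense: {years_to_reach(2.0, annual):.1f}")
```

    At roughly 11% a year, the old "wait for 2x" upgrade trigger takes close to seven years to fire instead of two or three.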

    People can talk about how the chipsets have improved, and yes, there are more SATA 6Gbps and USB 3 ports, but that's something we can sort out with PCI Express cards, not by rebuilding PCs.

    Anand mentioned how the desktop shows the future of the laptop, and the laptop shows the future of mobile devices. Well right now the desktop future appears to have stalled and we're really just waiting for the laptops to ram into the back of it. My rMBP is probably the closest I've been to a desktop CPU, with the same number of cores and not significantly lower clock speed, and it just makes it harder to justify putting money into the desktop when I can just throw it at the laptop.

    I fully support the direction Intel has taken with focusing on power consumption. I just wish they could keep the laptop-desktop separation relevant by using that power budget. Take four of their $200 chips, put them on some kind of single-socket MCM with 16 effective cores, and sell it for $1000. I'd at least have a reason to upgrade with that.
  • jebo - Sunday, July 21, 2013 - link

    Excellent podcast. Great discussion.
  • amrkal - Monday, July 22, 2013 - link

    Love the podcast. Just one complaint: the audio quality leaves something to be desired. It's not bad per se, but it's also not as good as the audio quality of shows on Dan Benjamin's 5by5 network. Check out Amplified, for example - its audio is really crisp, and I would love to have the same on your podcast. Thank you.
  • Sabresiberian - Monday, July 22, 2013 - link

    I'm not as disappointed with Haswell as some; for whatever reason I wasn't expecting much of an increase in performance where it matters to me (games), and I generally think of -E as the overclocking-enthusiast platform. There are of course issues with -E, but I think the main ones are being addressed in the Ivy Bridge-E version, and certainly will be in Haswell-E.

    That being said, I absolutely think it is important for Intel to hear that most enthusiasts are not happy with what they are doing. It isn't like AMD is totally dead and we can't go back to using their CPUs, and AMD could well catch up with them anyway, performance-wise. Intel needs to understand that while the lower end of desktop computing - the kind of person who doesn't really need much of any kind of computer anyway - is moving to mobile because it is cool and cheaper, the desktop is alive and well at the other end, and enthusiasts drive desktop purchases well beyond what they spend their own money on.

    I'm not interested in smaller form factors; I'll buy something that uses less power, but not if it won't do the same job, and I really want more performance for driving the games I play at 120Hz across three 2560x1440 (or better) displays. CPU performance of course isn't the only thing that affects those things, but it does affect me, and more is still better. Considering how much of the programming universe is still single-threaded, it also affects the user experience for pretty much everyone in one way or another.
  • Krysto - Tuesday, July 23, 2013 - link

    "At some point" Microsoft might be worried about cannibalization? It was the whole reason why they didn't port WP8 to tablets, and instead went with a more crippled version of Windows 8 (Windows RT), so they can charge $90 to OEM's for it, instead of $10.

    But this backfired anyway: the $90 license, plus the more expensive hardware needed to pair with Windows RT, makes such tablets very uncompetitive with Android tablets and even iPads (which tend to have better specs at the same price, too - no "retina" display in the Surface RT? Really, Microsoft?).
  • Kougar - Thursday, August 01, 2013 - link

    I feel desktop users got shafted with Haswell, primarily because of the disabling of so many features on "K" parts, in addition to the use of basic thermal paste. Charging more for an "overclocking" chip that was intentionally designed not to be OC-friendly felt like a blatant ripoff.

    K chips lose VT-d, which impacts a great many enthusiasts, as VMs are pretty commonplace these days. They also lose transactional memory support, and although only time will tell how much that affects end users in general, with 32GB and 64GB desktop systems moving into the upper mainstream it is disappointing.

    End-user Turbo-mode overclocking was also disabled on Haswell, which is ironic because locking all four cores to 3.9 GHz is about all the chip is good for anyway, and that's the route I'd have gone instead of buying a price-premium K part. I'm fairly sure I would have seen better performance from a non-K part with VT-d than I'm presently getting from a 4.2 GHz 4770K with VT-d disabled under VMware loads.

    The SB-E platform would actually have been my choice, but I can't tolerate the thought of buying a $600 hex-core that's several years out of date on uArch and process node...
  • nicsta - Tuesday, August 27, 2013 - link

    For me, the highlight was Dustin's maniacal laugh in reply to Anand's question "So what do you think of the new Mac Pro?"
