WiFi & IO

The new iMacs join the 2013 MacBook Airs in supporting 802.11ac. Unlike the MBA implementation, however, the iMac features a 3-antenna/3-stream configuration with the potential for even higher performance. Connected to Apple’s new 802.11ac AirPort Extreme I was able to negotiate the maximum link rate of 1300Mbps. Maintaining the full speed connection was quite tricky, however: it required very close proximity to the AP, and that the AP be located physically higher than the iMac.
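
That 1300Mbps figure falls directly out of the 802.11ac PHY math. A minimal sketch of the calculation, using the spec's constants for MCS 9 on an 80MHz channel (256-QAM, rate-5/6 coding, 234 data subcarriers, 3.6µs symbols with the short guard interval):

```python
# 802.11ac (VHT) PHY rate for MCS 9 on an 80MHz channel.
# Constants come from the 802.11ac spec: 256-QAM carries 8 bits per
# subcarrier at a 5/6 coding rate across 234 data subcarriers, and each
# OFDM symbol lasts 3.6us with the short guard interval enabled.
DATA_SUBCARRIERS_80MHZ = 234
BITS_PER_SUBCARRIER = 8      # 256-QAM
CODING_RATE = 5 / 6
SYMBOL_TIME_US = 3.6         # short guard interval

def vht_phy_rate_mbps(streams: int) -> float:
    bits_per_symbol = DATA_SUBCARRIERS_80MHZ * BITS_PER_SUBCARRIER * CODING_RATE
    return streams * bits_per_symbol / SYMBOL_TIME_US  # bits/us == Mbps

print(round(vht_phy_rate_mbps(2), 1))  # 866.7  -- 2013 MacBook Air (2 streams)
print(round(vht_phy_rate_mbps(3), 1))  # 1300.0 -- 2013 iMac (3 streams)
```

Each additional spatial stream adds another 433.3Mbps, which is why the iMac's third antenna pushes the link rate from the MacBook Air's 867Mbps to 1300Mbps.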

Range was absolutely incredible with the AirPort Extreme/2013 iMac combination. I didn’t have time to map out speed vs. distance from the AP before leaving on my most recent trip, but the combination of the two gave me better WiFi range/performance than any other wireless setup I’ve ever tested. I need to spend some more time with the two, but color me completely impressed at this point.

With OS X 10.8.5, Apple addressed some of the performance issues that plagued real world use of 802.11ac. Prior to the 10.8.5 update, I could get great performance using iPerf, but actually copying files between Macs on the same network never substantially exceeded the performance I could get over 802.11n.

The 10.8.5 update somewhat addressed the problem, raising average performance copying over an AFP share to ~330Mbps. It’s not unusual for software companies to only partially address an issue in existing software, especially if there’s an actual fix coming just around the corner. I had a suspicion that’s what was going on here so I threw OS X 10.9 (Mavericks) on both the iMac and my source machine, a 13-inch MacBook Pro with Retina Display.

The 13-inch rMBP was connected over Thunderbolt/GigE, while the iMac was connected over 802.11ac to the same network. First, let’s look at UDP and TCP performance using iPerf:

WiFi Performance

Peak UDP performance is 829.8Mbps. Running the same test using TCP drops performance down to 553Mbps. What about actual file copy performance? I saw peak performance as high as 720Mbps, but average file copy speed over my network setup was ~500Mbps.
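
Putting the review’s numbers in context, here is a quick sketch of how much of the negotiated 1300Mbps PHY rate survives at each layer of the stack (the measured figures are the ones quoted above and in the 10.8.5 discussion):

```python
# Fraction of the negotiated 1300Mbps 802.11ac link rate achieved at each
# layer, using the throughput figures measured in this review.
link_rate_mbps = 1300.0
measured = {
    "iPerf UDP": 829.8,
    "iPerf TCP": 553.0,
    "AFP file copy (10.8.5)": 330.0,
    "AFP file copy (Mavericks, avg)": 500.0,
}
for layer, mbps in measured.items():
    print(f"{layer}: {mbps:.0f}Mbps ({mbps / link_rate_mbps:.0%} of link rate)")
```

The drop from UDP to TCP to real file copies is expected: each layer adds protocol overhead, acknowledgment round trips, and filesystem work on top of the raw radio rate.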

You can definitely get better transfer speeds over wired Gigabit Ethernet, but 802.11ac (particularly over short distances) is very good. You’ll need to wait for Mavericks to really enjoy this performance, but the wait is almost over.

The rest of the IO is the same as in last year’s model. You get four USB 3.0 ports, two Thunderbolt ports, GigE, an SD card reader, and a 1/8" audio jack:

The Chassis

Last year Apple redesigned the iMac, making it thinner at the edges than an iPhone 5/5s or even an iPad mini. Many pointed out that reducing edge thickness didn’t really matter all that much given that the center of the iMac bulges out quite a bit. Since there’s no internal battery to make room for, reducing chassis volume is purely an exercise in design with no real tradeoffs as long as you can adequately cool what’s inside. I can’t speak to the 21.5-inch iMacs with discrete graphics, but the 65W Haswell + Crystalwell model I was sampled exhibited no thermal issues even during heavy use.

The iMac’s lone internal fan hummed along at ~1400 RPM during light use as well as during repeated Cinebench R15 runs while writing this review. One positive side effect of Intel targeting notebooks for all of its microprocessor architectures is the ease of cooling these 65W “desktop” parts. Keep in mind that Apple delivers a similar amount of performance in a very thin 15-inch notebook chassis as it does in a 21.5-inch iMac chassis.

Despite the reduction in internal volume, the redesigned 27-inch iMac is still a bit bulky to move around. The same can’t be said for the 21.5-inch model, however. Weighing only 12 pounds (the equivalent of a small dog or large cat), the 21.5-inch iMac is almost portable. I had to carry it around a lot during the course of my review (between desks, the photo area, and while testing WiFi) and I quickly came to appreciate just how compact this system is. Particularly in its default configuration, there’s only a single cable you have to deal with: the carefully angled power cable going into the machine.

It’s also neat to look at the iMac compared to one of my 24-inch CPU testbed monitors from a few years ago and realize that the two have virtually the same resolution, and the iMac is not only a better display but comes with an integrated Haswell PC as well.

Comments

  • rootheday3 - Monday, October 7, 2013 - link

    I don't think this is true. See the die shots here:
    http://wccftech.com/haswell-die-configurations-int...

    I count 8 different die configurations.

    Note that the reduction in LLC (CPU L3) on Iris Pro may be because some of the LLC is used to hold tag data for the 128MB of eDRAM. Mainstream Intel CPUs have 2MB of LLC per CPU core, so the die has 8MB of LLC natively. The i7-4770R has all 8MB enabled, but 2MB serves as eDRAM tag RAM, leaving 6MB for the CPU/GPU to use directly as cache (which is how it is reported on the spec sheet). The i5s generally have 6MB natively (for die recovery and/or segmentation reasons), and if 2MB is used for eDRAM tag RAM, that leaves 4MB for direct cache usage.

    Given that you get 128MB of eDRAM in exchange for the 2MB LLC consumed as tag ram, seems like a fair trade.
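
The arithmetic behind this comment's tag-RAM claim can be sanity-checked. Assuming a 64-byte cache line (a standard Intel line size, not a published Crystalwell figure), 2MB of tag storage works out to one byte per eDRAM line:

```python
# Rough check of the claim that 2MB of LLC suffices as tag storage for
# the 128MB eDRAM. The 64-byte line size is an assumption based on
# standard Intel cache line sizes, not a published Crystalwell figure.
EDRAM_BYTES = 128 * 1024 * 1024
LINE_BYTES = 64                      # assumed cache line size
TAG_STORE_BYTES = 2 * 1024 * 1024    # LLC reportedly reserved as tag RAM

edram_lines = EDRAM_BYTES // LINE_BYTES
bits_per_line = TAG_STORE_BYTES * 8 / edram_lines
print(edram_lines)    # lines of eDRAM to track
print(bits_per_line)  # bits available per line for tag + state
```

Eight bits per 64-byte line is a plausible budget for a tag plus a few coherence/state bits, which is consistent with the comment's trade of 2MB of LLC for 128MB of eDRAM.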
  • name99 - Monday, October 7, 2013 - link

    HT adds a pretty consistent 25% performance boost across an extremely wide variety of benchmarks. 50% is an unrealistic value.

    And, for the love of god, please stop with this faux-naive "I do not understand why Intel does ..." crap.
    If you do understand the reason, you are wasting everyone's time with your lament.
    If you don't understand the reason, go read a fscking book. Price discrimination (and the consequences thereof INCLUDING lower prices at the low end) are hardly deep secret mysteries.

    (And the same holds for the "Why oh why do Apple charge so much for RAM upgrades or flash upgrades" crowd. You're welcome to say that you do not believe the extra cost is worth the extra value to YOU --- but don't pretend there's some deep unresolved mystery here that only you have the wit to notice and bring to our attention; AND don't pretend that your particular cost/benefit tradeoff represents the entire world.

    And heck, let's be equal opportunity here --- the Windows crowd have their own version of this particular fool, telling us how unfair it is that Windows Super Premium Plus Live Home edition is priced at $30 more than Windows Ultra Extra Pro Family edition.

    I imagine there are the equivalent versions of these people complaining about how unfair Amazon S3 pricing is, or the cost of extra Google storage. Always with this same "I do not understand why these companies behave exactly like economic theory predicts; and they try to make a profit in the bargain" idiocy.)
  • tipoo - Monday, October 7, 2013 - link

    Wow, the gaming performance gap between OSX and Windows hasn't narrowed at all. I had hoped, two major OS releases after the Snow Leopard article, it would have gotten better.
  • tipoo - Monday, October 7, 2013 - link

    I wonder if AMD will support OSX with Mantle?
  • Flunk - Monday, October 7, 2013 - link

    Likely not; I don't think they're shipping GCN chips in any Apple products right now.
  • AlValentyn - Monday, October 7, 2013 - link

    Look up Mavericks; it supports OpenGL 4.1, while Mountain Lion is still at 3.2.

    http://t.co/rzARF6vIbm

    Good overall improvements in the Developer Previews alone.
  • tipoo - Monday, October 7, 2013 - link

    ML supports a higher OpenGL spec than Snow Leopard, but that doesn't seem to have helped lessen the real world performance gap.
  • Sm0kes - Tuesday, October 8, 2013 - link

    Got a link with real numbers?
  • Hrel - Monday, October 7, 2013 - link

    The charts show the Iris Pro take a pretty hefty hit any time you increase quality settings. HOWEVER, you're also increasing resolution. I'd be interested to see what happens when you increase resolution but leave detail settings at low-med.

    In other words, is the bottleneck the processing power of the GPU (I think it is) or the memory bandwidth? I suspect we could run Mass Effect or something similar at 1080p with medium settings.
  • Kevin G - Monday, October 7, 2013 - link

    "OS X doesn’t seem to acknowledge Crystalwell’s presence, but it’s definitely there and operational (you can tell by looking at the GPU performance results)."

    I bet OS X does, just not in the GUI. Type the following in Terminal:

    sysctl -a hw.

    There should be a line about the CPU's full cache hierarchy among other cache information.
