TSX

TSX, or Transactional Synchronization Extensions, is Intel's cache-based hardware transactional memory. Intel launched TSX with Haswell, but an erratum threw a spanner in the works and the feature had to be disabled via microcode. Broadwell-EP finally gets it right. The chicken is finally there, now it's time to enjoy the eggs.
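To make the feature a bit more concrete, here is a minimal lock-elision sketch using the RTM intrinsics from immintrin.h. It assumes a GCC or Clang toolchain on x86-64 compiled with -mrtm; the counter, the spinlock used as the fallback path, and the helper names are ours, not anything Intel ships.

```cpp
#include <immintrin.h>   // _xbegin / _xend / _xabort and _mm_pause
#include <cpuid.h>       // __get_cpuid_count (GCC/Clang)
#include <atomic>

static std::atomic<bool> lock_taken{false};  // simple spinlock used as the fallback
static long counter = 0;                     // shared data we want to update

// CPUID leaf 7, sub-leaf 0: EBX bit 11 reports RTM support (TSX may also be
// fused off or disabled by microcode, so a runtime check is mandatory).
static bool rtm_supported() {
    unsigned eax, ebx, ecx, edx;
    return __get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) && ((ebx >> 11) & 1);
}

static void lock_spin()   { while (lock_taken.exchange(true, std::memory_order_acquire)) _mm_pause(); }
static void unlock_spin() { lock_taken.store(false, std::memory_order_release); }

void increment() {
    if (rtm_supported() && _xbegin() == _XBEGIN_STARTED) {
        // Reading the lock word adds it to the transaction's read set: if any
        // other thread grabs the fallback lock, this transaction aborts rather
        // than racing with the lock holder.
        if (lock_taken.load(std::memory_order_relaxed))
            _xabort(0xff);
        ++counter;          // executed transactionally
        _xend();            // commit: the update becomes visible atomically
        return;
    }
    // Transaction aborted (conflict, capacity, interrupt...) or no RTM: take the lock.
    lock_spin();
    ++counter;
    unlock_spin();
}
```

A production implementation would typically retry the transaction a few times before giving up and taking the fallback lock.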

Faster Virtualization

Virtualization overhead is (for most people) a thing of the past. The performance overhead with bare metal hypervisors (ESXi, Hyper-V, Xen, KVM...) is less than a few percent. There is one exception, however: applications where I/O dominates, and packet-switching telco applications are the prime example. Intel, VMware, and the server vendors really want to convert the telcos from their firewall/router/VPN "black boxes" to virtual ones running on Software Defined Networking (SDN) infrastructure. To that end, Intel has continued to work on reducing the virtualization performance overhead. That overhead can be described as the number of VM exits (the VM stops and the hypervisor takes over) times the VM exit latency. In I/O intensive applications, VM exits happen frequently, which in turn leads to high and hard-to-predict I/O latency, exactly what the telco people hate.
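To put the "exits times exit latency" formula in perspective, here is a back-of-the-envelope sketch; the exit rate, exit cost, and clock speed are assumptions picked purely for illustration, not measurements.

```cpp
// Back-of-the-envelope illustration of the "exits x exit latency" model above.
#include <cstdio>

int main() {
    const double exits_per_second = 200000.0;  // assumed: I/O-heavy guest workload
    const double cycles_per_exit  = 500.0;     // assumed Haswell-era exit/entry round trip
    const double core_hz          = 2.2e9;     // assumed 2.2 GHz core

    const double cycles_lost = exits_per_second * cycles_per_exit;
    std::printf("Exit/entry overhead: %.1f%% of one core\n",
                100.0 * cycles_lost / core_hz);
    // This counts only the hardware exit/entry round trip; the time the
    // hypervisor spends actually handling each exit comes on top of it.
    return 0;
}
```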

Intel wants to conquer the telco datacenter by turning it into an SDN

So Intel worked on both factors. The Broadwell-EP VM exit latency has once again been reduced, this time from 500 cycles to 400.

It seems that the "ticks" also get a VM exit reduction. This slide from the Ivy Bridge EP presentation gives you a very good overview of the VM exits in a network intensive application, in this case a network bandwidth benchmark.

I quote from our Ivy Bridge-EP review:

The Ivy Bridge core is now capable of eliminating the VMexits due to "internal" interrupts, interrupts that originate from within the guest OS (for example, inter-vCPU interrupts and timers). The virtual processor needs to access the APIC registers to handle those interrupts, which would normally require a VMexit. Apparently, the current Virtual Machine Monitors do not handle this very well, as they need somewhere between 2000 and 7000 cycles per exit, which is high compared to other exits.

The solution is Advanced Programmable Interrupt Controller virtualization (APICv). The new Xeon has microcode that lets the guest OS read the APIC registers without any VMexit, though writing them still causes an exit. Some tests inside the Intel labs show up to 10% better performance.

In summary, Intel eliminated the green and dark blue components of the VM exit overhead with APICv. Broadwell now takes on the VM exits caused by external interrupts.

The technology in Broadwell-EP that does this is called posted interrupts. Essentially, posted interrupts enable direct interrupt delivery to the virtual machine without incurring a VM exit, courtesy of an interrupt remapping table. It is very similar to VT-d, which enabled DMA remapping thanks to a physical-to-virtual memory mapping table. Telco applications, among others, are very latency sensitive. Intel's Edwin Verplancke gave us one such example: before posted interrupts, a telco application had a latency varying from 4 to 47 (!) µs depending on the load. Posted interrupts made this a lot less variable, with latency varying from 2.4 to 5.2 µs.

As far as we are aware, KVM and Xen have already implemented support for posted interrupts.
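For readers who want to check their own hosts, here is a hedged sketch. It assumes a Linux/KVM host where the kvm_intel module exposes its enable_apicv parameter through sysfs; whether that parameter exists and is readable depends on the kernel and module version, so treat the path as an assumption to verify.

```cpp
#include <fstream>
#include <iostream>
#include <string>

int main() {
    // Assumed path; availability varies by kernel/module version.
    std::ifstream f("/sys/module/kvm_intel/parameters/enable_apicv");
    std::string value;
    if (f >> value)
        std::cout << "kvm_intel enable_apicv = " << value << "\n";  // typically "Y" or "N"
    else
        std::cout << "Parameter not exposed on this kernel\n";
    return 0;
}
```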

Comments

  • Casper42 - Thursday, March 31, 2016 - link

    HPE just dropped the 64GB LRDIMMs a week or two back.
    They are now exactly 2x the 32GB LRDIMM as far as List Price goes.
    LRDIMMs across the board are 31% more expensive than RDIMMs.
  • wishgranter - Tuesday, April 5, 2016 - link

    http://www.techpowerup.com/221459/samsung-starts-m...
  • wishgranter - Tuesday, April 5, 2016 - link

    While introducing a wide array of 10nm-class DDR4 modules with capacities ranging from 4GB for notebook PCs to 128GB for enterprise servers, Samsung will be extending its 20nm DRAM line-up with its new 10nm-class DRAM portfolio throughout the year.
  • nathanddrews - Thursday, March 31, 2016 - link

    Perf/W is obviously a very exciting metric for server farmers and it's generally exciting from a basic technology perspective, but its absolute performance isn't amazing. It's not like I'll be buying one anyway. LOL
  • asendra - Thursday, March 31, 2016 - link

    This interests me insofar as these would be the updated processors in the supposedly-coming-this-year Mac Pro refresh. Not that I would personally fork over that much cash, but I'm interested to see how much of a jump they will make.

    But things seem rather bleak. No wonder they decided to wait 3 years for a refresh.
  • MrSpadge - Thursday, March 31, 2016 - link

    Not sure which years you're counting in, but for the majority of us it's been 1.5 years from 09/2014 to today.
    https://en.wikipedia.org/wiki/Haswell_%28microarch...
  • asendra - Thursday, March 31, 2016 - link

    Apple didn't update the MacPros with Haswell-EP. They are still using Ivy Bridge
  • tipoo - Thursday, March 31, 2016 - link


    Wonder what they'll do on the GPU side though. Too early for next generation 14nm FF GPUs from anyone, if Nvidia was even a choice due to OpenCL politics. Another GCN 1.0 part in 2016 would be...A bag of hurt.

    Still waiting on the high end 15" rMBP to have something better than GCN 1.0...The performance, shockingly, hasn't come all that far from even my Iris Pro model. Maybe double, which is something, but I'd like larger than that to upgrade from integrated...
  • extide - Thursday, March 31, 2016 - link

    Nah, if they refresh it late this year, like in August or something like that, then 14/16nm FF GPUs will be available.

    At worst we would get GCN 1.2, but yeah it would suck to see 28nm GPUs put in there...
  • mdriftmeyer - Thursday, March 31, 2016 - link

    On what planet do you not grasp FinFET 14nm end of June from AMD?
