One aspect of CPUs I like to exploit is raw compute, and whether a variety of mathematical workloads can stress the system in ways that real-world usage might not.  The benchmarks used here are ones developed for testing MP servers and workstation systems back in early 2013, such as grid solvers and Brownian motion code.  Please head over to the first of those reviews, where the mathematics and small snippets of code are available.

3D Movement Algorithm Test

The algorithms in 3DPM employ either uniform or normally distributed random number generation, and vary in their mix of trigonometric operations, conditional statements, generation-and-rejection sampling, fused operations, and so on.  The benchmark runs through six algorithms for a specified number of particles and steps, calculates the speed of each algorithm, then sums them all for a final score.  This represents a real-world situation that a computational scientist may find themselves in, rather than a purely synthetic benchmark.  The benchmark parallelizes across the particles simulated, and we test both single-threaded and multi-threaded performance.  Results are expressed in millions of particles moved per second, and a higher number is better.
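The actual 3DPM code is C++, but the flavour of two of its algorithm styles can be sketched in a few lines: generating a random 3D step direction directly with trigonometry versus by generation-and-rejection.  The function names and structure below are illustrative, not taken from the benchmark itself.

```python
import math
import random

def step_trig(rng):
    """One unit-length move in a uniformly random 3D direction,
    generated directly with trigonometric operations."""
    z = rng.uniform(-1.0, 1.0)            # cos(theta), uniform on the sphere
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def step_reject(rng):
    """Same distribution via generation-and-rejection: sample points in a
    cube, reject those outside the unit ball, then normalise."""
    while True:
        x, y, z = (rng.uniform(-1.0, 1.0) for _ in range(3))
        s = x * x + y * y + z * z
        if 0.0 < s <= 1.0:
            n = math.sqrt(s)
            return (x / n, y / n, z / n)

def move_particles(n_particles, n_steps, step_fn, seed=0):
    """Accumulate n_steps random unit moves per particle.
    Each particle is independent, which is what makes the
    multi-threaded version embarrassingly parallel."""
    rng = random.Random(seed)
    positions = []
    for _ in range(n_particles):
        px = py = pz = 0.0
        for _ in range(n_steps):
            dx, dy, dz = step_fn(rng)
            px += dx; py += dy; pz += dz
        positions.append((px, py, pz))
    return positions
```

The trigonometric version does a fixed amount of work per step, while the rejection version trades transcendental operations for branches and wasted samples, which is exactly the kind of mix that exercises different parts of the core.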

Single threaded results:

For software that deals with one particle movement at a time and then discards it, very few memory accesses go beyond the caches into main DRAM.  As a result, we see little differentiation between the memory kits, except perhaps a loose automatic setting at 3000 C12 causing a small decline.

Multi-Threaded:

With all the cores loaded, the caches should be more stressed with data to hold, although in the 3DPM-MT test we see less than a 2% difference in the results and no correlation that would suggest a consistent trend.

N-Body Simulation

When a series of heavy mass elements are in space, they interact with each other through the force of gravity.  Thus when a star cluster forms, the interaction of every large mass with every other large mass defines the speed at which these elements approach each other.  When dealing with millions or billions of stars on such a scale, the movement of each star can be simulated through the physical theorems that describe the interactions.  The benchmark detects whether the processor is SSE2 or SSE4 capable and implements the relevant code path.  We run a simulation of 10,240 particles of equal mass – the output for this code is in terms of GFLOPs, and the result recorded was the peak GFLOPs value.
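The core of such a direct-sum N-body code is an all-pairs gravity kernel: every particle feels every other particle, so each time step costs O(N²) interactions.  A minimal sketch (in Python rather than the benchmark's SSE code, with G = 1 units and a hypothetical `soft` softening parameter to avoid singularities at close range):

```python
def accelerations(pos, mass, soft=1e-3):
    """Direct-sum gravitational accelerations.
    pos: list of (x, y, z) tuples; mass: list of masses.
    Every particle interacts with every other -> O(N^2) per step."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        xi, yi, zi = pos[i]
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - xi
            dy = pos[j][1] - yi
            dz = pos[j][2] - zi
            # softened inverse-cube distance: a = G*m*r / |r|^3
            inv_r3 = (dx * dx + dy * dy + dz * dz + soft * soft) ** -1.5
            acc[i][0] += mass[j] * dx * inv_r3
            acc[i][1] += mass[j] * dy * inv_r3
            acc[i][2] += mass[j] * dz * inv_r3
    return acc
```

With 10,240 particles the position and mass arrays total only a few hundred kilobytes, which is why the working set fits in cache and memory speed barely registers in the results below.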

Despite the interaction of many particles, a simulation of this scale can hold them all in the caches between time steps, which means that memory has little effect on the result.

Grid Solvers - Explicit Finite Difference

For any grid of regular nodes, the simplest way to calculate the next time step is from the values of the nodes around it.  This makes for easy mathematics and parallel simulation, as each node calculated depends only on the previous time step, not on the nodes around it at the current time step.  By choosing a regular grid, we avoid the extra levels of memory access that irregular grids require.  We test both 2D and 3D explicit finite difference simulations with 2^n nodes in each dimension, using OpenMP for threading, in single precision.  The grid is isotropic and the boundary conditions are sinks.  We iterate through a series of grid sizes, and results are shown in terms of ‘million nodes per second’ where the peak value is given in the results – higher is better.
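A minimal sketch of one explicit time step on a 2D grid (Python standing in for the benchmark's C++/OpenMP code; `alpha` is an illustrative stability coefficient, not a parameter from the benchmark):

```python
def explicit_step_2d(grid, alpha=0.2):
    """One explicit finite-difference step on a square 2D grid.
    Each new node depends only on its neighbours at the PREVIOUS
    step, so every node can be updated independently (the loop
    nest is what the benchmark parallelises with OpenMP).
    Boundary nodes are sinks: held at 0."""
    n = len(grid)
    new = [[0.0] * n for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            # five-point Laplacian stencil
            lap = (grid[i - 1][j] + grid[i + 1][j] +
                   grid[i][j - 1] + grid[i][j + 1] - 4.0 * grid[i][j])
            new[i][j] = grid[i][j] + alpha * lap
    return new
```

In 2D, consecutive nodes of a row are contiguous in memory and the neighbouring rows are usually still in cache; in 3D the same stencil also touches the planes above and below, and those strided jumps are what push large grids out of L3 and into DRAM.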

Two-Dimensional Grid:

In 2D we get a small bump at 1600 C9 in terms of calculation speed, with all other results being fairly equal.  Statistically this would be an outlier, although the result seemed repeatable.

Three Dimensions:

In three dimensions, the memory jumps required to access new rows of the simulation are far greater, resulting in L3 cache misses and accesses into main memory when the simulation is large enough.  At this boundary it seems that low CAS latencies work well, as do memory speeds above 2400 MHz.  The 2400 C12 result is a surprise.

Grid Solvers - Implicit Finite Difference + Alternating Direction Implicit Method

The implicit method takes a different approach from the explicit method – instead of considering one unknown in the new time step to be calculated from known elements in the previous time step, we consider that an old point can influence several new points by way of simultaneous equations.  This adds to the complexity of the simulation – the grid of nodes is solved as a series of rows and columns rather than points, reducing the parallel nature of the simulation by a dimension and drastically increasing the memory requirements of each thread.  The upside, as noted above, is the less stringent stability rules relating time steps and grid spacing.  For this we simulate a 2D grid of 2^n nodes in each dimension, using OpenMP in single precision.  Again our grid is isotropic with the boundaries acting as sinks.  We iterate through a series of grid sizes, and results are shown in terms of ‘million nodes per second’ where the peak value is given in the results – higher is better.
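Solving "a series of rows and columns" in the alternating direction implicit (ADI) method means each row (then each column) becomes a tridiagonal linear system, which is solved in O(n) with the Thomas algorithm.  A sketch of that per-row solve, assuming the standard diagonally dominant form (this is the generic algorithm, not the benchmark's own code):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(n) via the Thomas algorithm.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    In ADI, one such solve is done per grid row, then per column,
    each time step - so each thread sweeps a whole line, not a point."""
    n = len(d)
    cp = [0.0] * n   # modified super-diagonal
    dp = [0.0] * n   # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):             # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):    # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The forward and backward sweeps are inherently sequential along each line, which is why parallelism drops by a dimension and why each thread needs its own full row's worth of working arrays.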

2D Implicit:

Despite the nature of implicit calculations, it would seem that as long as 1333 MHz is avoided, results are fairly similar, with 1866 C8 being a surprise outlier.

89 Comments


  • tynopik - Thursday, September 26, 2013 - link

    colors reversed on USB 3.0 Copy Test chart where green is given to the highest (worst) results and red is given to the lowest (best) results
  • Tegeril - Thursday, September 26, 2013 - link

    These are the most colorblind-unfriendly images I've seen to date on this site.
  • Razorbak86 - Friday, September 27, 2013 - link

    You tell 'em, bro! Too bad he didn't put actual NUMBERS in the cells, instead of all those non-readable colors. ;)
  • QChronoD - Friday, September 27, 2013 - link

    Please redo the IGP gaming benchmarks with playable settings. All you did was waste your time testing at unreasonably high detail and not proven a single thing about whether the extra bandwidth is able to help increase performance.
  • pdjblum - Friday, September 27, 2013 - link

    Awesome work. Man, this must have taken forever, even with fast memory. Thanks so much.
  • adityarjun - Friday, September 27, 2013 - link

    CAS Latency is given as 6-7-8-9-10-11. What does that mean?
    http://www.flipkart.com/transcend-jetram-ddr3-8-gb...

    Any help on which of these would be better and why?
    http://www.flipkart.com/computers/computer-compone...
  • anton68 - Friday, September 27, 2013 - link

    It'd be nice to see how the Iris Pro eDRAM affects compute performance when used as an L4 cache.
  • pjdaily - Saturday, September 28, 2013 - link

    I'd like to see this test too.
  • MadAd - Friday, September 27, 2013 - link

    Hi Ian, thanks for the review, could you explain the thinking behind using only 1360x768 for the gaming tests, especially for the single card benchmarks? Would stretching the single card with a memory intensive game at a high resolution change the results more towards IGPU fractions?

    This is more the scenario I would expect gamers to be facing and even if the answer turns out to be no, that in itself would be valuable data to learn.
  • merikafyeah - Friday, September 27, 2013 - link

    Please, please, please incorporate some ramdisk benchmarks for these memory tests. It seems like such a given but no one seems to think of this, which is essentially the only test where you'll see some major differences between speed tiers. Things like gaming don't really result in differences worth your money.

    I recommend Primo Ramdisk for its rock-solid stability but if you're looking for a free alternative I recommend SoftPerfect RAM Disk, which has been noted to be significantly faster than Primo, but may not be as stable under certain circumstances.
