Simulations, Memory Requirements and Dual Processors

Throughout my simulation career, it would have been easy enough to just write code, compile it and simply watch it run.  But the enthusiast and speed freak within me wanted the code to go a little faster, then a little faster still, until learning about the types of memory in a system and how to prioritize code for them became part of my standard coding routine.  There are multiple issues here that come together from all sides of the equation.

First of all, let us discuss at a high level the concept of memory caching on a pseudo-processor. 

The diagram above is a loose representation of a dual-core processor, with each core labeled ‘P’, the registers labeled ‘R’, and the thickness of the lines representing the relative bandwidth between each level.

The processor has access to a set of registers, which are a high-bandwidth, low-capacity memory store.  These registers hold intermediate calculation data, as well as the state needed for context switching with HyperThreading.  The processor is also directly linked to an L1 (‘level 1’) cache, which is the first place the processor looks if it needs data from memory.  If the data is not in the L1 it looks in the L2 (‘level 2’), and so on until the data is found.  The closer the data is to the processor, the quicker it can be accessed and the quicker the calculation completes, so there are large benefits to bigger caches.

In the diagram above, each processor core has its own L1 and L2 caches, but shares the L3 cache.  This allows each core to probe the data in the L3.  What is not shown is that there are also snoop protocols designed to let each core know what is going on in the other core’s L2 cache.  With data flying around, maintaining cache coherency is vital.
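As a rough, hedged illustration of the hierarchy just described, the small sketch below times repeated passes over working sets of increasing size; on most machines the time per element tends to step up as the set outgrows the L1, then the L2, then the L3.  The sizes, pass counts and use of std::chrono are my own choices for the sketch, not anything taken from this review’s benchmark code.

    #include <algorithm>
    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        // Walk working sets from 16 kB up to 64 MB; the time per element typically
        // steps up each time the set outgrows a cache level (L1 -> L2 -> L3 -> RAM).
        for (std::size_t kb = 16; kb <= 64 * 1024; kb *= 2) {
            std::vector<float> data(kb * 1024 / sizeof(float), 1.0f);
            volatile float sink = 0.0f;   // stops the compiler optimising the loop away
            const std::size_t passes = std::max<std::size_t>(1, (16u << 20) / data.size());

            auto t0 = std::chrono::steady_clock::now();
            for (std::size_t p = 0; p < passes; ++p)
                for (std::size_t i = 0; i < data.size(); ++i)
                    sink = sink + data[i];
            auto t1 = std::chrono::steady_clock::now();

            double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
            std::printf("%6zu kB : %.3f ns per element\n",
                        kb, ns / (double(passes) * data.size()));
        }
        return 0;
    }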

Take an example where a simulation runs two threads on our imaginary processor, and each thread requires 200 kB of data.  If our L2 cache is 256 kB then each thread can easily run inside its own L2, keeping data rates high.  If one core needs data from the other thread, the values are copied into the L3 at the expense of time.

Now imagine that our processor supports HyperThreading.  This allows us to run two threads on each processor core.  We still have the same amount of hardware, but when one thread performs a memory read or write, there is a delay until that operation is confirmed.  While this delay is occurring, the processor core can save the state of the first thread and carry on with the second.

The downside to our new HyperThreading scenario comes when we launch a program with four threads, each using 200 kB of memory.  The two threads on each core now need a combined 400 kB, so if our L2 cache is only 256 kB the data spills over into the L3 cache.  This has the potential to slow down simulations if those read and write operations are slow.  (In modern processors, a lot of the logic built into the chip is designed to move data around so that these memory operations are as quick as possible – it predicts which data is needed next and fetches it ahead of time.)
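To make those numbers concrete, here is a minimal OpenMP sketch of the scenario above: each thread repeatedly updates its own roughly 200 kB block, so with one thread per core the block fits comfortably in a 256 kB L2, while two threads per core push the combined working set out towards the L3.  The block size and the trivial update are illustrative assumptions, not the simulation code used later in this review.

    #include <omp.h>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        const std::size_t elems = 200 * 1024 / sizeof(float);   // ~200 kB per thread
        const int nThreads = omp_get_max_threads();
        // One private block per thread; each thread only ever touches its own block.
        std::vector<std::vector<float>> block(nThreads, std::vector<float>(elems, 1.0f));

        #pragma omp parallel
        {
            std::vector<float>& b = block[omp_get_thread_num()];
            for (int pass = 0; pass < 1000; ++pass)
                for (std::size_t i = 0; i < b.size(); ++i)
                    b[i] = 0.5f * (b[i] + 1.0f);   // working set stays inside this block
        }

        std::printf("threads: %d, per-thread working set: %zu kB\n",
                    nThreads, elems * sizeof(float) / 1024);
        return 0;
    }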

This is the simple case of a dual core processor with HyperThreading.  It gets even more complicated if you add in the concept of dual processors.

If we have two dual-core processors (four cores total) with HyperThreading (eight threads), the only memory shared between the two processors is the main random access memory.  When a standard program launches multiple threads, it has no say in where those threads will end up – they may be run out of order on whichever processor core is available.  Thus if one thread needs data from another, several things may occur (the short sketch after the list below shows one way to see where threads actually land):

(1) The thread may be delayed until the other thread is processed
(2) The data may already be on the same processor
(3) The data may be on the other processor, which causes delays
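As a quick way of seeing this in practice, the hedged sketch below has each OpenMP thread report which logical CPU it is currently running on; without explicit pinning (for example via the standard OMP_PROC_BIND/OMP_PLACES environment variables) the threads can land on either socket.  It uses the Linux-specific sched_getcpu() call purely for illustration – the code in this review appears to be Windows-based – and is not part of the review’s benchmarks.

    #include <omp.h>
    #include <sched.h>   // sched_getcpu(), Linux-specific
    #include <cstdio>

    int main() {
        #pragma omp parallel
        {
            // With no affinity set, the OS is free to place (and later migrate)
            // these threads across any core on either processor.
            std::printf("thread %d of %d is on CPU %d\n",
                        omp_get_thread_num(), omp_get_num_threads(), sched_getcpu());
        }
        return 0;
    }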

There are many different types of simulation that can be performed, each with its own unique way of requesting memory or dealing with threads.  As mentioned on the first page of this review, even within the research group I was in, if two people wrote code to perform the same simulation, the memory requirements of each could be vastly different.  This makes things even more complicated, as when moving into a multithreaded scenario the initially slower simulation might end up being sped up the most.

Talking About Simulations

The next few pages will each cover a different type of simulation, based on my own experiences and the code I have written.  Several are finite-difference grid solvers (both explicit and implicit); there is also a Brownian motion test based on six movement algorithms, an n-body simulation, and our usual compression and video editing tests.  The simulations written for this review will each be explained briefly, both mathematically and in code.
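As a hedged preview of what the explicit finite difference kernels look like, the sketch below performs one generic 2D explicit diffusion step on a grid stored in a 1D array.  The names cA, NX and ‘pos’ mirror the ones discussed in the comments on this article, but the kernel itself is only an illustration of the technique, not the exact code benchmarked in this review.

    // One explicit 2D diffusion step: cB becomes the updated copy of cA.
    // r = D*dt/dx^2 and must respect the usual explicit stability limit (r <= 0.25 in 2D).
    void explicitStep2D(const float* cA, float* cB, int NX, int NY, float r) {
        for (int y = 1; y < NY - 1; ++y) {
            for (int x = 1; x < NX - 1; ++x) {
                int pos = y * NX + x;
                cB[pos] = cA[pos] + r * (cA[pos + 1] + cA[pos - 1] +
                                         cA[pos + NX] + cA[pos - NX] -
                                         4.0f * cA[pos]);
            }
        }
    }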

Comments

  • toyotabedzrock - Saturday, January 5, 2013 - link

    There is a large number of very smart people on Google+. You really should come join us.
  • JlHADJOE - Tuesday, January 8, 2013 - link

    Of course there are lots of smart people on G+! You're all google employees right? =P
  • Activate: AMD - Saturday, January 5, 2013 - link

    As a fellow chemist, I must say that you have to be some kind of nut to want to do computational/physical chemistry. If you need me, I'll be at the bench!

    Good article too!
  • engrpiman - Saturday, January 5, 2013 - link

    I didn't read the article in full, but what I did read was top notch. I found your simulations and mathematics very interesting. I took a physics class which was focused on writing code to run mathematical simulations. Using the given Java lib I wrote my own code to calculate pi; when I returned from the gym the program had calculated 3.1. I then re-wrote the program from scratch, ditched the built-in libs and reran it: I had 20 decimals in 30 seconds. It was an epic improvement.

    All in all I think your article could be very useful to me.

    Thanks for writing.
  • SodaAnt - Saturday, January 5, 2013 - link

    THIS is why I read Anandtech. I'll admit that I wasn't quite in the mood to read all the equations (I'll have to do that later), but really, these kinds of reviews make my day.
  • Cardio - Saturday, January 5, 2013 - link

    Wonderful review...as always. Thanks
  • Hakon - Saturday, January 5, 2013 - link

    Hi Ian. Thanks for the nice article. I have one suggestion regarding the explicit finite difference code:

    You could try to reorder the loops such that the memory access is more cache friendly. Right now 'pos' is incremented by NX (or even NX*NY in 3D), which will generate a lot of cache misses for large grids. If you switch the x and the y loops (in the 2D case) this can be avoided. [A sketch of the two orderings follows at the end of these comments.]
  • IanCutress - Saturday, January 5, 2013 - link

    Either way I order the loops, each point has to read one up, one down, one left and one right. My current code tries to keep three of those as consecutive reads and jumps once, keeping the old jump in local memory. If I adjusted the loops, I could keep one dimension in local memory, but I'd have to jump outside twice (both likely cache misses) to get the other data. I couldn't cache those two values as I never use them again in that loop iteration.

    When I did this code on the GPU, one method was to load an XY block into memory and iterate in the Z-dimension, meaning that each thread per loop iteration only loaded one element, with a few of them loading another for the halo, but all cache aligned.

    I hope that makes sense :)
    Ian
  • Hakon - Saturday, January 5, 2013 - link

    Yes, but when you access the array 'cA' at 'pos' the CPU will fetch the entire cache line (64 bytes in the case of your machine, i.e. 16 floats) containing the corresponding memory address into the CPU cache. That means that subsequent accesses to, say, 'pos + 1', 'pos + 2' and so on will be served by the cache. Accessing an array in such a sequential manner is therefore fast.

    However, when you access an array in a nasty way, e.g. 'NX + x' -> '2*NX + x' -> '3*NX + x', then each such access implies a trip to main memory if NX happens to be large.

    That you need to move up / down and sideways in memory does not matter. When you write down the accesses of the code with the reordered loops you will notice that they just access three "lines" in memory in a cache friendly way.

    Not reusing the old values of the last iteration should not affect performance in a measurable way. Even if the compiler fails to see this optimization, the accesses will be served by the L1 cache.

    Btw, did you allocate the array with NUMA in mind, i.e. did you initialize your memory in an OpenMP loop with the same access pattern as used in the algorithms? I am a bit surprised by the bad performance of your dual Xeon system. [A first-touch initialization sketch follows at the end of these comments.]
  • IanCutress - Saturday, January 5, 2013 - link

    Memory was allocated via the new command as it is 1D. When using a 2D array the program was much slower. I was unaware you could allocate memory in an OpenMP-aware way, which, thinking about it, could make the 2D array quicker. I also tried writing the code using the PPL and lambdas, but that was also slower than a simple OpenMP loop.

    I'm coming at these algorithms from the point of view of a non-CompSci person interested in hardware, and the others in the research group were chemists content to write single-threaded code on multi-core machines. Transferring the OpenMP variations of that code from a 1P to a 2P system, as the results show, gives variable results depending on the algorithm.

    There are always ways to improve the efficiency of the code (and many ways to make it unreadable), but for the most part how well the move to a 2P system works out depends on how your code performs. Please understand that my examples are within my limits of knowledge and representative of the research I did :) I know that SSE2/SSE4/AVX would probably help, but I have never looked into those. More often than not, these environments are all about research throughput, so rather than spend a few weeks improving efficiency by 10% (or less), they'd rather spend that money on a faster system which theoretically increases the throughput of the same code by 100%.

    I'll have a look at switching the loops if I write an article similar to this in the future :)

    Ian
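As a follow-up to Hakon's loop-reordering suggestion above, here is a hedged sketch of the two orderings being discussed, using illustrative names (cA, cB, NX, NY, r) rather than the actual code benchmarked in this review.  In the first nest 'pos' jumps by NX on every inner iteration; in the second it advances by one, walking along cache lines.

    // Inner loop over y: 'pos' jumps by NX each iteration, so on a large grid
    // almost every read of cA lands on a cache line that is not already loaded.
    void stepInnerY(const float* cA, float* cB, int NX, int NY, float r) {
        for (int x = 1; x < NX - 1; ++x)
            for (int y = 1; y < NY - 1; ++y) {
                int pos = y * NX + x;
                cB[pos] = cA[pos] + r * (cA[pos + 1] + cA[pos - 1] +
                                         cA[pos + NX] + cA[pos - NX] - 4.0f * cA[pos]);
            }
    }

    // Inner loop over x: 'pos' advances by one, so the rows y-1, y and y+1 are each
    // streamed through sequentially and mostly served from already-fetched cache lines.
    void stepInnerX(const float* cA, float* cB, int NX, int NY, float r) {
        for (int y = 1; y < NY - 1; ++y)
            for (int x = 1; x < NX - 1; ++x) {
                int pos = y * NX + x;
                cB[pos] = cA[pos] + r * (cA[pos + 1] + cA[pos - 1] +
                                         cA[pos + NX] + cA[pos - NX] - 4.0f * cA[pos]);
            }
    }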
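On Hakon's NUMA question above: memory pages are typically committed to the node of the thread that first writes them (on Linux, and in a similar fashion on Windows), so initializing the grid in a parallel loop with the same scheduling as the solver keeps each socket working mostly on local memory.  The sketch below shows that first-touch idea; the function name allocateGrid and the scheduling choice are illustrative assumptions, not the review's allocation code.

    #include <omp.h>
    #include <cstddef>

    // First-touch initialization: each thread writes the portion of the grid it
    // will later update, so those pages end up on that thread's NUMA node.
    float* allocateGrid(int NX, int NY) {
        float* cA = new float[static_cast<std::size_t>(NX) * NY];
        #pragma omp parallel for schedule(static)
        for (int y = 0; y < NY; ++y)
            for (int x = 0; x < NX; ++x)
                cA[static_cast<std::size_t>(y) * NX + x] = 0.0f;
        return cA;
    }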
