A Deep Dive on HSA

For our look into Heterogeneous System Architecture (HSA), Heterogeneous Unified Memory Architecture (hUMA) and Heterogeneous Queuing (hQ), our resident compute expert Rahul Garg steps up to the plate to discuss the implications, implementation and applications of such concepts:

GPUs are designed for solving highly parallel problems - for example, operations on matrices are highly parallel and usually well-suited to GPUs. However, not every parallel problem is a good fit for GPU compute. Currently, using a GPU for most problems requires copying data between the CPU and GPU - for discrete GPUs, an application will typically copy input data from system RAM to GPU memory over the PCIe bus, do some computation, and then send the results back over the PCIe bus when the computation is complete. Matrix addition, for example, is a highly parallel operation that has been well documented and optimized on CPUs and GPUs alike, but depending on the structure of the rest of the application it may not be suitable for GPU acceleration if the copies over the PCIe bus dominate the application's overall runtime. In this example, the data transfer alone will often cost more than simply doing the matrix addition on the CPU. The data copy between the CPU and the GPU also adds complexity to software development.

There are two reasons for the necessity of fast data transfer today. First, GPU compute currently usually implies using a discrete GPU attached to the system over the PCIe bus. Discrete GPUs have their own onboard RAM, usually in the range of 1GB to 4GB, or up to 12GB on some recent server-oriented cards. The onboard RAM typically has very high bandwidth (GDDR5, say), which the GPU uses to keep its many threads fed as it switches between them to hide memory latency. In this setup, even if we assume that the GPU could read/write data in system RAM over the PCIe bus, it is often more efficient to transfer all relevant data to the onboard RAM once, and let the compute kernels read/write the onboard GPU RAM, rather than reading/writing data slowly over PCIe. The second reason for data copies is that the CPU and the GPU have distinct address spaces. Before HSA, they could not make sense of each other's address spaces. Even integrated GPUs, which do physically share the same memory, did not have the bookkeeping machinery necessary for a unified address space. HSA addresses this second scenario.

People have been trying many techniques to avoid the data transfer overhead. For example, you can try to do data transfers in parallel with some other computation on the CPU so that computation and communication overlap. In some cases, the CPU can be bypassed altogether, for example by transferring some file data from a PCIe SSD directly to the GPU using GPUDirect. However, such techniques are not always applicable and require a lot of effort from the programmer. Ultimately, true shared memory between CPU and GPU is the way to go for many problems though discrete GPUs with their onboard high-bandwidth RAM will shine on many other problems despite the data-copy overhead.

Unified memory: State of the art before HSA

The terms "shared memory" and "unified memory" are thrown about quite frequently in the industry and can mean different things in different contexts. We examine the current state of the art across platform vendors:

NVIDIA: NVIDIA has introduced "unified memory" in CUDA. However, on current chips, it is a software-based solution that is more of a convenience for software developers and hidden behind APIs for ease of use. The price of data transfer still needs to be paid in terms of performance, and NVIDIA's tools merely hide some of the software complexity. However, NVIDIA is expected to offer true shared memory in the Maxwell generation, which will likely be integrated into the successor of Tegra K1 in 2015 or 2016.

AMD: AMD touts "zero copy" on Llano and Trinity for OpenCL programs. However, in most cases, this only provides a fast way to copy data from CPU to GPU and the ability to read data back from GPU in some limited cases. In practice, the zero copy feature has limited uses due to various constraints such as high initialization cost. For most use cases, you will end up copying data between CPU and GPU.

Intel: Intel provides some support for shared memory today on the Gen7 graphics in Ivy Bridge and Haswell exposed through OpenCL and DirectX. Intel's CPU/GPU integration is actually more impressive than Llano or Trinity from the perspective of memory sharing. However, sharing is still limited to some simple cases as it is missing pointer sharing, demand-based paging and true coherence offered in HSA and thus the integration is far behind Kaveri. I am expecting better support in Broadwell and Skylake. Intel's socketed Knights Landing (future Xeon Phi) product may also enable heterogeneous systems where both CPU and accelerator access the same memory, which might be the way forward for discrete GPUs as well (if possible).

Others: Companies like ARM, Imagination Technologies, Samsung and Qualcomm are also HSA Foundation members and probably working on similar solutions. Mali T600 and T700 GPUs expose some ability of sharing GPU buffers between CPU and GPU through OpenCL 1.1. However, I don't think we will see a full HSA stack from vendors other than AMD in the near future.

As of today, the HSA model implemented in Kaveri is the most advanced CPU-GPU integration yet and offers the most complete solution of the bunch.

hUMA: Unified Memory in HSA

Now we examine the unified memory capabilities provided in HSA. The main benefits of a heterogeneous unified memory architecture boil down to a single, shared addressable memory space: by reducing the cost of having the CPU and GPU access the same data, work can be offloaded to whichever processor suits it best without worrying about the expense of moving the data.

Eliminating CPU-GPU data copies: The GPU can now access the entire CPU address space without any copies. Consider our matrix addition example again: in an HSA system, the copy of input data to the GPU and the copy of results back to the CPU can both be eliminated.

Access to entire address space: In addition to the performance benefit of eliminating copies, the GPU is also no longer limited to its onboard RAM, as is usually the case with discrete GPUs. Even top-end discrete cards currently top out at about 12GB of onboard RAM, while the CPU has the advantage of access to a potentially much larger pool of memory. In many cases, such as scientific simulations, this means the GPU can now work on much larger datasets without any special effort on the part of the programmer to somehow fit the data into the GPU's limited address space. Kaveri will have access to up to 32GB of DDR3 memory, where the limiting factor is more the lack of 16GB unregistered non-ECC memory sticks on the market than anything else. The latency between the APU and the DRAM still exists, however, so a large L3 or eDRAM in the future might improve matters, especially in memory bandwidth limited scenarios and for prefetching data.

Unified addressing in hardware: This is the big new piece in Kaveri and HSA that is not offered by any other system currently. Application programs allocate memory in a virtual CPU memory space and the OS maintains a mapping between virtual and physical addresses. When the CPU encounters a load instruction, it converts the virtual address to physical address and may need the assistance of the OS. The GPU also has its own virtual address space and previously did not understand anything about the CPU's address space. In the previous generation of unified memory systems like Ivy Bridge, the application had to ask the GPU driver to allocate a GPU page table for a given range of CPU virtual addresses. This worked for simple data structures like arrays, but did not work for more complicated structures. Initialization of the GPU page table also created some additional performance overhead.

In HSA, the GPU can directly load/store from the CPU virtual address space. Thus, the application can directly pass CPU pointers, enabling a much broader class of applications to take advantage of the GPU. For example, sharing linked lists and other pointer-based data structures is now possible without complicated hijinks on the part of the application. There is also no driver overhead in sharing pointers, which improves efficiency.

With this solution, the CPU and the GPU can access the same data set at the same time, and can also work on the data set together. This enables developers to program for high utilization scenarios, and it is where AMD's figure of 800+ GFLOPS from a single Kaveri APU comes from. With up to 12 compute cores, and despite having to drive the CPU and GPU cores with different kernels, they can at least all access the same data with minimal overhead. It is still up to the developer to manage locks and thread fences to ensure data coherency.

Demand-driven paging: We overlooked one detail in the previous discussion. Converting virtual addresses to physical addresses is a non-trivial task - the page containing the desired physical location may not actually be in memory and may need to be loaded from, say, the disk, which requires OS intervention. This is called demand-driven paging. GPUs prior to Kaveri did not implement demand-driven paging. Instead, the application had to know in advance what range of memory addresses it was going to access and map those to a GPU buffer object. The GPU driver then locked the corresponding pages in memory. However, in many cases the application programmer may not know the access pattern beforehand. Pointer-based data structures like linked lists, where the nodes may point anywhere in memory, were difficult to deal with. Demand-driven paging, together with unified addressing, allows sharing of arbitrary data structures between the CPU and GPU, and thus opens the door to GPU acceleration of many more applications.

CPU-GPU coherence: So far we have discussed how the GPU can read/write the CPU address space without any data copies. However, that is not the full story. The CPU and the GPU may want to work together on a given problem, and for some types of problems it is critical that the CPU and GPU be able to see each other's writes during the computation. This is non-trivial because of issues such as caches. The HSA memory model provides optional coherence between the CPU and the GPU through what are called acquire-release memory instructions. This coherence comes at a performance cost, however, so HSA also provides mechanisms for the programmer to express when CPU/GPU coherence is not required. Apart from coherent memory instructions, HSA provides atomic instructions that allow the CPU and GPU to read/write atomically from a given memory location. These 'platform atomics' behave like regular atomics, i.e. they provide a read-modify-write operation in a single instruction, without the developer having to build custom fences or locks around the data element or data set.

HSAIL: Portable Pseudo-ISA for Heterogeneous Compute

The HSA Foundation wants the same heterogeneous compute applications to run on all HSA-enabled systems. Thus, it needed to standardize the software interface supported by any HSA-enabled system. The HSA Foundation wanted a low-level API to the hardware that could be targeted by compilers for different languages. Typically compilers target the instruction set of a processor; however, given the diversity of hardware targeted by HSA (CPUs, GPUs, DSPs and more), standardizing on a single instruction set was not possible. Instead, the HSA Foundation has standardized on a pseudo instruction set called HSAIL, which stands for HSA Intermediate Language. The idea is that the compiler for a high-level language (like OpenCL, C++ AMP or Java) will generate HSAIL, and the HSA driver will generate the actual binary code using just-in-time compilation. The idea of a pseudo-ISA has been used in many previous portable technologies, such as Java bytecode and Direct3D bytecode. HSAIL is low-level enough to expose many details of the hardware, and has been carefully designed such that the conversion from HSAIL to binary code can be very fast.

In terms of competition, Nvidia provides PTX which has similar goals to HSAIL in terms of providing a pseudo instruction set target for compilers. PTX is only meant for Nvidia systems, though some research projects do provide alternate backends such as x86 CPUs. HSAIL will be portable to any GPU, CPU or DSP that implements HSA APIs.

hQ: Optimized Task Queuing in HSA

Heterogeneous Queuing is an odd term for those not in the know. What we have here is code that, during its processing, calls upon work that is better performed by another device. Say, for example, I am running a mathematical solver, and part of the code that runs on the GPU requires CPU assistance in computing. The way these pieces of work are queued and handled is what is meant by 'heterogeneous queuing'. HSA brings three new capabilities in this context to Kaveri and other HSA systems compared to previous-gen APUs.

User-mode queuing: In most GPGPU APIs, the CPU queues jobs/kernels for the GPU to execute. This queuing goes through the GPU driver and requires some system calls. However, in HSA the queuing can be done in user-mode which will reduce the overhead of job dispatch. The lower-latency of dispatch will make it feasible to efficiently queue even relatively small jobs to the GPU.

Dynamic parallelism: Typically the CPU queues work for the GPU, but the GPU could not enqueue work for itself. NVIDIA introduced the capability for GPU kernels to call other GPU kernels with GK110/Titan and named this dynamic parallelism. HSA systems will include dynamic parallelism as well.

CPU callbacks: With Kaveri, not only will the GPU be able to queue work for itself, it will be able to queue CPU callback functions. This capability goes beyond competitor offerings. CPU callbacks are quite useful in many situations: for example, they may be required to call system APIs that cannot run on the GPU, to invoke legacy CPU code that has not yet been ported to the GPU, or simply to run code that is more complex and thus better suited to the CPU.

Programming Tools Roadmap

Given that many users write in different languages for many different purposes, AMD has to have a multifaceted approach when it comes to providing programming tools.

Now we examine part of the software roadmap for programming tools for HSA:

Base HSA stack: The base HSA execution stack supporting HSAIL and the HSA runtime for Kaveri is expected to become available in Q2 2014.

LLVM: HSAIL is only one piece of the puzzle. While many compiler writers are perfectly happy to directly generate HSAIL from their compilers, many new compilers today are built on top of toolkits like LLVM. AMD will also open-source an HSAIL code generator for LLVM, which will allow compiler vendors using LLVM to generate HSAIL with very little effort. So we may eventually see compilers for languages such as C++, Python or Julia targeting HSA based systems. Along with the work being done in Clang to support OpenCL, the LLVM to HSAIL generator will also simplify the work of building OpenCL drivers for HSA-based systems. In terms of competition, NVIDIA already provides a PTX backend for LLVM.

OpenCL: At the time of launch, Kaveri will be shipping with an OpenCL 1.2 implementation. My understanding is that the launch drivers do not provide the HSA execution stack, and the OpenCL functionality is built on top of the legacy AMDIL-based graphics stack. In Q2 2014, a preview driver providing OpenCL 1.2 with some unified memory extensions from OpenCL 2.0, built on top of the HSA infrastructure, should be released. A driver with support for OpenCL 2.0 built on top of the HSA infrastructure is expected in Q1 2015.

C++ AMP: C++ AMP was pioneered by Microsoft, and the Microsoft stack is built on top of DirectCompute. DirectCompute does not really expose unified memory, and even Direct3D 11.2 takes only preliminary steps towards it. Microsoft's C++ AMP implementation targets DirectCompute and thus won't be able to take full advantage of the features offered by HSA enabled systems. However, C++ AMP is an open specification, and other compiler vendors can write C++ AMP compilers. HSA Foundation member Multicoreware is working with AMD on a C++ AMP compiler that generates HSAIL for HSA enabled platforms, and OpenCL SPIR for other platforms (such as Intel's).

Java "Sumatra" and Aparapi: AMD already provides an API called Aparapi that compiles annotated Java code to OpenCL. AMD will be updating Aparapi to take advantage of HSA in Java 8 and it will be ready sometime in 2014. In addition, Oracle (owner of Java) also announced a plan to target HSA by generating HSAIL from Java bytecode in their HotSpot VM. This is expected to be released with Java 9 sometime in 2015. It will be interesting to watch if IBM, which has announced a partnership with NVIDIA, will also enable HSA backends in its own JVM.

HSA Conclusion

Overall, HSA brings many new capabilities compared to the basic GPGPU compute available on AMD platforms to this point. Unified memory in HSA is an extremely exciting step: the explicit elimination of data copies, the ability of the GPU to address large amounts of memory, and the sharing of complex data structures will enable heterogeneous computing for many more problems. The ability of the GPU to queue kernels for itself, or even make CPU callbacks, are both great additions as well. CPU/GPU integration in HSA is definitely a step above the competition and has the potential to make the two more equal peers than in the current model. As a compiler writer, I do think that HSA systems will be easier to program for both compiler writers and application developers, and in the long term HSA has the potential to make heterogeneous computing more mainstream.

However, HSA will only begin delivering when the software ecosystem takes advantage of its capabilities, and this will not happen overnight. While HSA is impressive at the architecture level, AMD also needs to deliver on its software commitments as soon as possible. This includes not only HSA enabled drivers, but also tools such as profilers and debuggers, good documentation, and other pieces such as the LLVM backend. I would also like to see more programming language projects compiling to HSA, especially the open source libraries AMD is working on for its HSA implementation.

Finally, whether HSA succeeds or fails will also depend upon the direction HSA Foundation members such as Qualcomm, ARM, TI, Imagination Technologies and Samsung take. At this point, no one other than AMD has announced any HSA-enabled hardware products. Programmers may be reluctant to adopt HSA if it only works on AMD platforms, and adoption will be slow until heterogeneous programming becomes part of the core fundamentals for new programmers. I do expect HSA-enabled products from other vendors to arrive, but the timing will be important. In the meantime, competitors such as NVIDIA and Intel are not sitting idle either, and we should see better integrated heterogeneous solutions from them soon as well. For now, AMD should be commended for moving the industry forward and for having the most integrated heterogeneous computing solution today.


  • eanazag - Wednesday, January 15, 2014 - link

    In reference to the no FX versions, I don't think that will change. I think we are stuck with it indefinitely. From the AMD server roadmap and info in this article related to process, I believe that the Warsaw procs will be a die shrink to 12/16 because the GF 28nm process doesn't help clocks. The current clocks on the 12/16 procs already suck so they might stay the same or better because of the TDP reduction at that core count, but it doesn't benefit in the 8 core or less pile driver series. Since AMD has needed to drive CPU clock way higher to compensate for a lack of IPC and the 28 nm process hurts clocks, I am expecting to not see anything for FX at all. Only thing that could change that is if a process at other than GF would make a good fit for a die shrink. I still doubt they will be doing any more changes to the FX series at the high end.

    So to me, this might force me to consider only Intel for my next build because I am still running discrete GPUs in desktop and I want at least 8 core (AMD equivalent in Intel) performance CPUs in my main system. I will likely go with a #2 Haswell chip. I am not crazy about paying $300 for a CPU, but $200-300 is okay.

    I would not be surprised to see an FX system with 2P like the original FX. The server roadmap is showing that. This would essentially be two Kaveri's and maybe crossfire between the two procs. That sounds slightly interesting if I could ratchet up the TDP for the CPU. It does sound like a Bitcoin beast.
  • britjh22 - Wednesday, January 15, 2014 - link

    I think there are some interesting points to be made about Kaveri, but I think the benchmarks really fall short of pointing to some possibly interesting data. Some of the things I got from this:

    1. The 7850k is too expensive for the performance it currently offers (no proliferation of HSA), and the people comparing it to cheaper CPU/dGPU are correct. However to say Kaveri fails based on that particular price comparison is a failure to see what else is here, and the article does point that out somewhat.

    2. The 45W part does seem to be the best spot at the moment for price to performance, possibly indicating that more iGPU resources don't give much benefit without onboard cache like Crystalwell/Iris Pro. However, putting the 4770R in amongst the benches is not super useful due to the price and lack of availability, not to mention it not being socketed.

    3. The gaming benchmarks may be the standard for AT, but they really don't do an effective job to either prove or disprove AMD's claims for gaming performance. Plenty of people will (and have looking at the comments) say they have failed at 1080p gaming scores based on 1080p extreme settings. Even some casual experimentation to see what is actually achievable at 1080p would be helpful and informative.

    4. I think the main target for these systems isn't really being addressed by the review, which may be difficult to do in a score/objective way, but I think it would be useful. I think of systems like this, and more based off the 65W/45W parts as great mainstream parts. For that price ($100-130ish) you would be looking at an i3 with iGP, or a lower feature pentium part with a low end dGPU. I think at this level you get a lot more from your money with AMD. You have a system which one aspect will not become inadequate before the other (CPU vs GPU), how many relatives do we know where they have an older computer with enough CPU grunt, but not enough GPU grunt. I've seen quite a few where the Intel integrated was just good enough at the time of launch, but a few years down the road would need a dGPU or more major system upgrade. A system with the A8-7600 would be well rounded for a long time, and down the road could add a mid grade dGPU for good gaming performance. I believe it was an article on here that recently showed even just an A8 was quite sufficient for high detail 1080p when paired with a mid to high range card.

    5. As was referenced another review and in the comments, a large chunk of steam users are currently being served by iGPU's which are worse then this. These are the people who play MMO's, free to play games, source games, gMod games, DOTA2/LoL, indie games, and things like Hearthstone. For them, and most users that these should be aimed at, the A10-7850K (at current pricing) is not a winner, and they would probably be better (value) or equally (performance) served by the A8-7600. This is a problem with review sites, including AT, which tend to really look at the high end of the market. This is because the readership (myself included) is interested for personal decision making, and the manufacturer's provide these products as, performance wise, they are the most flattering. However, I think some of the most interesting and prolific advances are happening in the middle market. The review does a good job of pointing that out with the performance charts at 45W, however I think some exploration into what was mentioned in point #3 would really help to flesh this out. Anand's evaluation for CPU advances slowing down in his Mac Pro is a great example of this, and really points out how HSA could be a major advancement. I upgraded from a Q6600 to a 3570K, and don't see any reasons coming up to make a change any time soon, CPU's have really become somewhat stagnant at the high end of performance. Hopefully AMD's gains at the 45W level can pan out into some great APU's in laptops for AMD, for all the users for games like the above mentioned.
  • fteoath64 - Sunday, January 19, 2014 - link

    As consumers, our problem with the prices inching upwards in the mid-range is that Intel is not supplying enough models in the i3 range within the price points of AMD's APUs (mid to highest models). This means the prices are well segmented in the market such that they will not change, giving excuse for slight increases as we have seen with the Richland parts. The lack of competition in these segments suggests cartel-like behaviour in the x86 market.
    AMD is providing the best deal on a per-transistor basis, while consumers expect their CPU performance to run on par with Intel. That is not going to happen as Intel's GPU improvement inches closer to AMD. With HSA, the tables have turned for AMD and Intel, and Nvidia certainly will have to respond some time in the future. This will come when the software changes for HSA make a significant improvement in overall performance for AMD APUs. We shall see, but I am hopeful.
  • woogitboogity - Wednesday, January 15, 2014 - link

    Ah AMD... to think that in the day of Thunderbird they were once the under-appreciated underdog where the performance was. The rebel against the P4 and its unbelievably impractical pipeline architecture.

    Bottom line is Intel still needs them as anti-trust suit insurance... with this SoC finally getting off the ground, is anyone else wondering whether Intel was less aggressive with their own SoC stuff as an "AMD doggy/gimp treat"? Still nice to be able to recommend a processor without worrying about the onboard graphics when they are on chip.
  • Hrel - Wednesday, January 15, 2014 - link

    "do any AnandTech readers have an interest in an even higher end APU with substantially more graphics horsepower? Memory bandwidth obviously becomes an issue, but the real question is how valuable an Xbox One/PS4-like APU would be to the community."

    I think as a low end Steam Box that'd be GREAT! I'm not sure the approach Valve is looking to take with steam boxes, but if there's no "build your own" option then it doesn't make sense to sell it to us. Makes a lot more sense for them to do that and just sell the entire "console" directly to consumers. Or, through a reseller, but then I become concerned with additional markup from middlemen.
  • tanishalfelven - Wednesday, January 15, 2014 - link

    You can install steamos on whatever computer you want... even one you built yourself or one you already own. I'd personally think a pc based on something like this processor would be significantly less expensive (i can imagine 300 bucks) and maybe even faster. And more importantly with things like humble bundle it'd be much much cheaper in the games department...
  • tanishalfelven - Wednesday, January 15, 2014 - link

    i am wrong on faster than ps4 however, point stands
  • JBVertexx - Wednesday, January 15, 2014 - link

    As always, very good writeup, although I must confess that it took me a few attempts to get through the HSA deep dive! Still, it was a much needed education, so I appreciate that.

    I have had to digest this, as I was initially really disappointed at the lack of progress on the CPU front, but after reading through all the writeups I could find, I think the real story here is about the A8-7600 and opening up new markets for advanced PC based gaming.

    If you think about it, that is where the incentive is for game developers to develop for Mantle. Providing the capability for someone who already has or would purchase an advanced discrete GPU to play with equal performance on an APU provides zero economic incentive for game developers.

    However, if AMD can successfully open up advanced gaming to the mass, low cost PC market, even if that performance is substandard by "enthusiast" standards, then that does provide huge economic incentive for developers, because the cost of entry to play your game has just gone down significantly, potentially opening up a vast new customer base.

    With Steam really picking up "steam", with the consoles on PC tech, and with the innovative thinking going on at AMD, I have come around to thinking this is all really good stuff for PC gaming. And it's really the only path to adoption that AMD can take. I for one am hoping they're successful.
  • captianpicard - Wednesday, January 15, 2014 - link

    I doubt Kaveri was ever intended for us, the enthusiast community. The people whom Kaveri was intended for are not the type that would read a dozen CPU/GPU reviews and then log on to newegg to price out an optimal FPS/$ rig. Instead, they would be more inclined to buy reasonably priced prebuilt PCs with the hope that they'd be able to do some light gaming in addition to the primary tasks of web browsing, checking email, watching videos on youtube/netflix, running office, etc.

    Nothing really up till now has actually fulfilled that niche, and done it well, IMO. Lots of machines from dell, HP, etc. have vast CPU power but horrendous GPU performance. Kaveri offers a balanced solution at an affordable price, in a small footprint. So you could put it into a laptop or a smart tv or all in one pc and be able to get decent gaming performance. Relatively speaking, of course.
  • izmanq - Wednesday, January 15, 2014 - link

    why put i7 4770 with discrete HD 6750 in the integrated GPU performance charts ? :|
