The Bifrost Core: Decoupled

Finally moving up to the 500ft view, we have the logical design of a single Bifrost core. On top of the changes we’ve discussed so far at the quad/execution engine level, ARM has made a number of changes to how the rest of the architecture works and how all of this fits together as a whole.

First and foremost, a single Bifrost core contains 3 quad execution engines. This means that a single core is executing up to 12 FMAs per clock, spread over the aforementioned 3 quads. These quads are in turn fed by the core’s thread management frontend (now called a Quad Manager), which, combined with the other frontends, issues work to all of the functional units throughout the core.
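For a bit of back-of-the-envelope context, peak ALU throughput falls straight out of the quad math. The quick sketch below assumes the 4-wide quads described on the previous page, and uses a purely hypothetical core count and clockspeed for the sake of example:

```c
#include <stdio.h>

int main(void)
{
    /* Per-core figures from the architecture description */
    const int quads_per_core = 3;  /* quad execution engines per Bifrost core */
    const int lanes_per_quad = 4;  /* SIMT lanes, i.e. FMAs, per quad per clock */
    const int flops_per_fma  = 2;  /* one FMA counts as a multiply plus an add */

    /* Hypothetical configuration: core count and clockspeed are assumptions */
    const int    cores     = 8;     /* e.g. an 8-core configuration */
    const double clock_ghz = 0.85;  /* purely illustrative clockspeed */

    int fmas_per_core_clk = quads_per_core * lanes_per_quad;  /* = 12 */
    double gflops = (double)fmas_per_core_clk * flops_per_fma * cores * clock_ghz;

    printf("FMAs per core per clock: %d\n", fmas_per_core_clk);
    printf("Peak FP32 throughput:    %.1f GFLOPS\n", gflops);
    return 0;
}
```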

Having now seen the quad execution engines, observant readers may have noticed that they are surprisingly sparse: they contain ALUs, register files, and little else. In most other architectures – including Midgard – additional functional units are organized within the execution engines, but this is not the case for Bifrost. Instead the load/store unit, texture unit, and other units have been evicted from the execution engines and placed as separate units along the control fabric.

Along with the shift from ILP to TLP, this is one of the more significant changes in Bifrost as compared to Midgard. Not unlike the TLP shift, much of this change is driven by resource utilization. These units aren’t used as frequently as the ALUs, and this is especially the case as shader programs grow in length. As a result, rather than placing this hardware within the execution engines and likely having it sit underutilized, ARM has moved it into separate units that are shared by the whole core.
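To put a rough number on that utilization argument, here is a sketch with a made-up instruction mix (the ratios are illustrative rather than measured from any real shader). A texture unit dedicated to a single execution engine sits idle most of the time, while one shared across the core sees proportionally more traffic:

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical instruction mix for one shader - illustrative, not measured */
    const double alu_ops = 24.0;  /* arithmetic instructions (one per clock, per engine) */
    const double tex_ops = 2.0;   /* texture fetches in the same shader */
    const int    engines = 3;     /* execution engines per Bifrost core */

    /* A texture unit dedicated to one engine only ever sees that engine's
       fetches; a single shared unit sees the traffic of all three engines. */
    double dedicated_util = tex_ops / alu_ops * 100.0;
    double shared_util    = dedicated_util * engines;

    printf("Dedicated per-engine texture unit busy: ~%.1f%% of clocks\n", dedicated_util);
    printf("Shared core-wide texture unit busy:     ~%.1f%% of clocks\n", shared_util);
    return 0;
}
```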

The one risk here is that there is now contention for these resources, but in practice it should not be much of an issue. Comparatively speaking, this is similar to NVIDIA’s SMs, where multiple blocks of ALUs share load/store and texture units. Meanwhile this should also simplify core design a bit; only a handful of units have L2 cache data paths, and all of those units are now outside of the execution engines.

Overall these separated units are not significantly different from their Midgard counterparts; the big change here is merely their divorce from the execution engines. The texture unit, for example, still offers the same basic feature set and throughput as Midgard’s, according to ARM.

Meanwhile something that has seen a significant overhaul compared to Midgard is ARM’s geometry subsystem. Bifrost still uses hierarchical tiling to bin geometry into tiles to work on it. However ARM has gone to quite a bit of effort here to reduce the memory usage of the tiler, as high resolution screens and higher geometry complexity were pushing up the tiler’s memory usage, ultimately hurting performance and power efficiency.

Bifrost implements a much finer grained memory allocation system, one that also does away entirely with minimum allocation requirements. This keeps memory consumption down by reducing the amount of overhead from otherwise oversized buffers.
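As a conceptual illustration of why allocation granularity matters (the tile counts and block sizes below are made up for the example, not ARM’s actual parameters), compare a coarse allocator with a fixed minimum per tile against one that hands out small chunks on demand:

```c
#include <stdio.h>

/* Made-up figures purely for illustration - not ARM's real parameters */
#define NUM_TILES       1024
#define MIN_ALLOC_BYTES 8192   /* coarse allocator: fixed minimum per tile */
#define CHUNK_BYTES     256    /* fine-grained allocator: small chunks on demand */

int main(void)
{
    size_t coarse = 0, fine = 0;

    for (int t = 0; t < NUM_TILES; t++) {
        /* Pretend most tiles hold only a little geometry data, a few hold a lot */
        size_t needed = (t % 16 == 0) ? 6000 : 300;

        /* Coarse: every tile pays at least the minimum allocation */
        coarse += (needed > MIN_ALLOC_BYTES) ? needed : MIN_ALLOC_BYTES;

        /* Fine-grained: round up to the nearest small chunk instead */
        fine += ((needed + CHUNK_BYTES - 1) / CHUNK_BYTES) * CHUNK_BYTES;
    }

    printf("Coarse (minimum-allocation) tiler memory: %zu KB\n", coarse / 1024);
    printf("Fine-grained tiler memory:                %zu KB\n", fine / 1024);
    return 0;
}
```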

But perhaps more significant is that ARM has implemented a micro-triangle discard accelerator in Bifrost. By eliminating sub-pixel triangles that can’t be seen early in the pipeline, ARM no longer needs to store those triangles in the tiler, further reducing memory needs. Overall, ARM is reporting that Bifrost’s tiler changes reduce tiler memory consumption by up to 95%.
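Conceptually, a micro-triangle discard test amounts to asking whether a triangle’s screen-space footprint can possibly cover a sample point, and dropping it before it ever reaches the tiler’s data structures if it can’t. The minimal software sketch below illustrates the general technique; it is not ARM’s actual hardware logic:

```c
#include <math.h>
#include <stdbool.h>

typedef struct { float x, y; } vec2;

/* Returns true if a triangle's screen-space bounding box cannot contain any
 * pixel sample center (assumed to sit at integer coordinates + 0.5), meaning
 * the triangle can be dropped before it is ever binned. The test is
 * conservative: it only discards triangles that provably cover no sample. */
static bool is_discardable_micro_triangle(vec2 a, vec2 b, vec2 c)
{
    float min_x = fminf(a.x, fminf(b.x, c.x));
    float max_x = fmaxf(a.x, fmaxf(b.x, c.x));
    float min_y = fminf(a.y, fminf(b.y, c.y));
    float max_y = fmaxf(a.y, fmaxf(b.y, c.y));

    /* The box contains a sample x-coordinate only if some k + 0.5 falls
     * inside [min_x, max_x]; likewise for y. */
    bool spans_sample_x = floorf(max_x - 0.5f) >= ceilf(min_x - 0.5f);
    bool spans_sample_y = floorf(max_y - 0.5f) >= ceilf(min_y - 0.5f);

    /* No sample column or no sample row inside the box: nothing to shade. */
    return !(spans_sample_x && spans_sample_y);
}
```

Being conservative is the point here: an early-out like this only ever removes work, and any triangle it can’t prove invisible simply continues down the normal path.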

Along similar lines, ARM has also targeted vertex shading memory consumption for optimization. New to Bifrost is a feature ARM is calling Index-Driven Position Shading, which takes advantage of some of the aforementioned tiler changes to reduce the amount of memory bandwidth consumed there. ARM’s estimates put the total bandwidth savings for position shading at around 40%, given that only certain steps of the process can be optimized.
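The broad idea behind splitting vertex shading this way can be sketched in software: shade only the positions of the vertices the index buffer actually references, let the tiler bin and cull using those positions alone, and only then compute (and write out) the remaining attributes for geometry that survives. The outline below is a conceptual illustration, with hypothetical callbacks and data layout rather than ARM’s driver interface:

```c
#include <stddef.h>
#include <stdbool.h>

typedef struct { float x, y, z, w; } vec4;

/* Hypothetical callbacks standing in for the two halves of a vertex shader
 * and for the tiler's visibility test - these are not ARM's interfaces. */
typedef vec4 (*position_fn)(unsigned vertex_id);            /* position only */
typedef void (*varying_fn)(unsigned vertex_id, float *out); /* remaining outputs */
typedef bool (*visible_fn)(const vec4 tri[3]);              /* binning / cull test */

/* Conceptual two-phase, index-driven vertex shading pass: positions are shaded
 * for the vertices the index buffer references, but the (typically larger) set
 * of varyings is only shaded and written for triangles that survive culling. */
static void shade_index_driven(const unsigned *indices, size_t tri_count,
                               position_fn shade_pos, varying_fn shade_var,
                               visible_fn tri_visible,
                               float *varying_buf, size_t floats_per_vertex)
{
    for (size_t t = 0; t < tri_count; t++) {
        const unsigned *idx = &indices[3 * t];

        /* Phase 1: position-only shading, driven directly by the index buffer.
         * (A real implementation would also deduplicate shared vertices.) */
        vec4 tri[3];
        for (int v = 0; v < 3; v++)
            tri[v] = shade_pos(idx[v]);

        /* Phase 2: full attribute shading only for visible triangles, so the
         * varying data for culled geometry never costs memory bandwidth. */
        if (tri_visible(tri))
            for (int v = 0; v < 3; v++)
                shade_var(idx[v], &varying_buf[idx[v] * floats_per_vertex]);
    }
}
```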

Finally, at the opposite end of the rendering pipeline we have Bifrost’s ROPs, or as ARM labels them, the blending unit and the depth & stencil unit. While these units take a similar direction to the texture unit – there are no major overhauls here – ARM has confirmed that Bifrost’s blending unit does offer some new functionality not found in Midgard’s. Bifrost’s blender can now blend FP16 targets, whereas Midgard was limited to integer targets. The inclusion of floating point blends not only saves ARM a conversion – Midgard would have to convert FP16s to integer RGBA – but the native FP16 blend means that precision/quality should be improved as well.
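The precision argument is easy to demonstrate: blending in floating point preserves small fractional contributions, while blending against an integer target quantizes every intermediate result. The sketch below runs an ordinary "source over" blend both ways, with FP32 standing in for FP16 and made-up color/alpha values:

```c
#include <stdio.h>
#include <math.h>

/* Quantize a [0,1] value to 8-bit UNORM and back - effectively what blending
 * against an integer render target does to every intermediate result. */
static float quantize_unorm8(float v)
{
    int q = (int)lroundf(v * 255.0f);
    if (q < 0)   q = 0;
    if (q > 255) q = 255;
    return q / 255.0f;
}

int main(void)
{
    /* Repeatedly blend a faint source over the destination ("source over"):
     * dst = src*alpha + dst*(1-alpha). Values are purely illustrative. */
    const float src = 0.9f, alpha = 0.001f;
    float dst_float = 0.0f;  /* native floating point blending             */
    float dst_int   = 0.0f;  /* result quantized to UNORM8 after each blend */

    for (int i = 0; i < 1000; i++) {
        dst_float = src * alpha + dst_float * (1.0f - alpha);
        dst_int   = quantize_unorm8(src * alpha + dst_int * (1.0f - alpha));
    }

    printf("Floating point blending: %f\n", dst_float);
    printf("Integer blending:        %f\n", dst_int);
    return 0;
}
```

With contributions this small the integer path never accumulates anything at all, while the floating point path converges as expected.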

FP16 blends have a throughput of 1 pixel/clock, just like integer blends, so these are full speed. On that note, Bifrost’s ROP hardware does scale with the core count, so virtually every aspect of the architecture will scale up with larger configurations. Given what Mali-G71 can scale to, this means that the current Bifrost implementation can go up to 32px/clock.

Comments

  • Ariknowsbest - Monday, May 30, 2016 - link

    Mainly MediaTek and Rockchip that I can remember.
  • Ryan Smith - Monday, May 30, 2016 - link

    Bingo. I haven't forgotten about the IMG crew, but in the Android space (which is really the only competitive space for GPU IP licensing) they've lost most of their market share, especially at the high-end.
  • name99 - Tuesday, May 31, 2016 - link

    However it would be interesting to know how these various features (eg primacy of SIMT rather than SIMD, coherent common address space) compare to PowerVR.
  • lucam - Tuesday, May 31, 2016 - link

    At this point I think it is a blessing that IMG has Apple as a big customer; without it they would have completely lost all mobile market share.
  • Ariknowsbest - Tuesday, May 31, 2016 - link

    But it's not good to be dependent on one large customer. Maybe the emergence of VR can help them to retake market share.
  • lucam - Tuesday, May 31, 2016 - link

    Totally agree with you. PowerVR is a hell of a solution, but for some reason IMG has lost its leadership in the mobile market and has almost disappeared in Android.
    I wonder what the situation would be now if IMG didn't have Apple. Maybe even worse.
  • zeeBomb - Monday, May 30, 2016 - link

    Stay frosty my friends.
  • Krysto - Monday, May 30, 2016 - link

    I guess ARM will abandon HSAIL now that SPIR-V and Vulkan are here. It probably makes sense to stop focusing on OpenCL as well, if developers can just use some other language than OpenCL with SPIR-V.
  • mdriftmeyer - Monday, May 30, 2016 - link

    One uses C99+ or C11++ in OpenCL 2.x. SPIR-V same thing. Why would I care to write in SPIR-V unless it was a requirement for portability? If I want a lower level, higher performance result I'll skip SPIR-V which bridges with OpenCL via LLVM-IR and go straight to using Clang/LLVM and OpenCL?

    Don't confuse SPIR-V with the HSA Foundation. They are solving different needs, and SPIR-V doesn't address what AMD's APUs are designed to resolve.
  • beginner99 - Tuesday, May 31, 2016 - link

    Yeah that's a bit of a bummer. For me this pretty much means HSA is DOA. No software company will invest in something HSA compatible if it is only available on AMD APUs.
