The Bifrost Core: Decoupled

Finally moving up to the 500ft view, we have the logical design of a single Bifrost core. Augmenting the changes we’ve discussed so far at the quad/execution engine level, ARM has made a number of changes to how the rest of the architecture works, and how all of this fits together as a whole.

First and foremost, a single Bifrost core contains 3 quad execution engines. This means that a single core is at any time executing up to 12 FMAs, spread over the aforementioned 3 quads. These quads are in turn fed by the core’s thread management frontend (now called a Quad Manager), which, combined with the other frontends, issues work to all of the functional units throughout the core.
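To put that figure in perspective, peak shader math throughput follows directly from the quad count. Below is a quick back-of-the-envelope sketch; the 12 FMAs/clock is ARM’s figure, while the core count and clockspeed in the example are purely illustrative placeholders rather than announced specifications.

```python
# Back-of-the-envelope peak FP32 throughput for a Bifrost configuration.
# 3 quad execution engines x 4 lanes = 12 FMAs per clock per core (ARM's figure).
# The core count and clockspeed below are illustrative assumptions only.

QUADS_PER_CORE = 3
LANES_PER_QUAD = 4
FLOPS_PER_FMA = 2          # one multiply + one add per fused multiply-add

def peak_gflops(cores: int, clock_mhz: float) -> float:
    """Peak FP32 GFLOPS for a hypothetical Bifrost configuration."""
    fmas_per_clock = cores * QUADS_PER_CORE * LANES_PER_QUAD
    return fmas_per_clock * FLOPS_PER_FMA * clock_mhz * 1e6 / 1e9

# Example: a hypothetical 8-core configuration running at 850MHz.
print(peak_gflops(cores=8, clock_mhz=850))   # ~163 GFLOPS FP32
```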

As we’ve now seen the quad execution engines, insightful readers might have noticed that the execution engines are surprisingly sparse: they contain ALUs, register files, and little else. In most other architectures – including Midgard – more of the functional units are organized within the execution engines, but this is not the case for Bifrost. Instead the load/store unit, texture unit, and other units have been evicted from the execution engines and placed as separate units along the control fabric.

Along with the shift from ILP to TLP, this is one of the more significant changes in Bifrost as compared to Midgard. Not unlike the TLP shift then, much of this change is driven by resource utilization. These units aren’t used as frequently as the ALUs, and this is especially the case as shader programs grow in length. As a result, rather than placing this hardware within the execution engines and likely having it underutilized, ARM has moved it into separate units that are shared by the whole core.
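The utilization argument is easiest to see with a toy model. The sketch below compares one texture unit per execution engine against a single unit shared by the whole core; the 10% texture-instruction ratio is an arbitrary assumption chosen for illustration, not an ARM figure.

```python
# Toy utilization model: per-engine vs. core-shared texture/load-store hardware.
# The instruction mix here is an illustrative assumption, not an ARM-provided number.

ENGINES_PER_CORE = 3
TEXTURE_FRACTION = 0.10   # assumed fraction of issued instructions needing the texture unit

# Dedicated: one texture unit per execution engine, each mostly idle.
dedicated_units = ENGINES_PER_CORE
dedicated_utilization = TEXTURE_FRACTION

# Decoupled: one texture unit shared by the whole core, kept busier.
shared_units = 1
shared_utilization = min(1.0, TEXTURE_FRACTION * ENGINES_PER_CORE)

print(f"dedicated: {dedicated_units} units, each ~{dedicated_utilization:.0%} utilized")
print(f"shared:    {shared_units} unit,   ~{shared_utilization:.0%} utilized")
```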

The one risk here is that there’s now contention for these resources, but in practice it should not be much of an issue. Comparatively speaking, this is relatively similar to NVIDIA’s SMs, where multiple blocks of ALUs share load/store and texture units. Meanwhile this should also simplify core design a bit; only a handful of units have L2 cache data paths, and all of those units are now outside of the execution engines.

Overall these separated units are not significantly different from their Midgard counterparts, and the big change here is merely their divorce from the execution engines. The texture unit, for example, still offers the same basic feature set and throughput as Midgard’s, according to ARM.

Meanwhile something that has seen a significant overhaul compared to Midgard is ARM’s geometry subsystem. Bifrost still uses hierarchical tiling to bin geometry into tiles before working on it. However ARM has gone through quite a bit of effort here to reduce the tiler’s memory usage, as high resolution screens and growing geometry complexity were pushing that usage up, ultimately hurting performance and power efficiency.

Bifrost implements a much finer grained memory allocation system, one that also does away entirely with minimum allocation requirements. This keeps memory consumption down by reducing the amount of overhead from otherwise oversized buffers.
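The overhead being eliminated here is the familiar rounding-up cost of coarse-grained allocation. Below is a minimal sketch of the effect with entirely hypothetical block sizes, as ARM hasn’t disclosed the actual granularities involved.

```python
# Illustration of allocation overhead from coarse vs. fine granularity.
# Block sizes and the buffer size are hypothetical, purely for illustration.

def allocated_bytes(needed: int, block: int) -> int:
    """Round the needed size up to a whole number of allocation blocks."""
    blocks = -(-needed // block)      # ceiling division
    return blocks * block

needed = 1_300                                  # bytes a tiler bin actually needs (made up)
coarse = allocated_bytes(needed, block=4_096)   # e.g. a large minimum allocation
fine   = allocated_bytes(needed, block=128)     # finer-grained allocation

print(f"coarse: {coarse} bytes allocated ({coarse - needed} wasted)")
print(f"fine:   {fine} bytes allocated ({fine - needed} wasted)")
```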

But perhaps more significant is that ARM has implemented a micro-triangle discard accelerator into Bifrost. By eliminating sub-pixel triangles that can’t be seen early on, ARM no longer needs to store those triangles in the tiler, further reducing memory needs. Overall, ARM is reporting that Bifrost’s tiler changes reduce tiler memory consumption by up to 95%.
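Conceptually, micro-triangle discard amounts to a cheap screen-space coverage test applied before a primitive is ever binned. ARM hasn’t detailed the exact test it uses, but the general idea can be sketched as follows.

```python
import math

# Simplified micro-triangle discard: reject a triangle before binning if its
# screen-space bounding box cannot cover any pixel sample center.
# This is a conceptual sketch, not a description of ARM's actual hardware test.

def covers_a_sample(tri):
    """tri: three (x, y) vertices in pixel coordinates; samples at pixel centers (x + 0.5, y + 0.5)."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    first_sx = math.ceil(min(xs) - 0.5) + 0.5   # smallest sample x-coordinate >= min x
    first_sy = math.ceil(min(ys) - 0.5) + 0.5   # smallest sample y-coordinate >= min y
    return first_sx <= max(xs) and first_sy <= max(ys)

# A sub-pixel triangle falling between sample centers is discarded early,
# so it never has to be stored in the tiler's data structures.
tiny = [(10.10, 20.10), (10.30, 20.15), (10.20, 20.35)]
big  = [(10.0, 20.0), (40.0, 20.0), (10.0, 40.0)]
print(covers_a_sample(tiny))   # False -> discard before binning
print(covers_a_sample(big))    # True  -> keep and bin
```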

Along similar lines, ARM has also targeted vertex shading memory consumption for optimization. New to Bifrost is a feature ARM is calling Index-Driven Position Shading, which takes advantage of some of the aforementioned tiler changes to reduce the amount of memory bandwidth consumed there. ARM’s estimates put the total bandwidth savings for position shading at around 40%, given that only certain steps of the process can be optimized.
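ARM hasn’t walked through the exact pipeline, but the general idea behind index-driven position shading is to shade only vertex positions up front, driven by the index buffer, and to defer the remaining attributes until binning and culling have thrown away the primitives that don’t contribute. Below is a conceptual sketch under that assumption; the function names and data layout are hypothetical.

```python
# Conceptual sketch of index-driven position shading (not ARM's documented pipeline):
# 1) shade only positions for the vertices referenced by the index buffer,
# 2) bin/cull triangles using those positions,
# 3) shade the remaining (non-position) attributes only for surviving vertices.

def index_driven_position_shading(indices, vertices, shade_position, shade_varyings, survives):
    unique = sorted(set(indices))
    positions = {i: shade_position(vertices[i]) for i in unique}        # step 1

    surviving = set()
    for a, b, c in zip(indices[0::3], indices[1::3], indices[2::3]):    # step 2
        if survives(positions[a], positions[b], positions[c]):
            surviving.update((a, b, c))

    varyings = {i: shade_varyings(vertices[i]) for i in surviving}      # step 3
    return positions, varyings
```

Only the surviving vertices ever get their full attribute sets shaded and written out, which is where the bandwidth savings come from.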

Finally, at the opposite end of the rendering pipeline we have Bifrost’s ROPs, or as ARM labels them, the blending unit and the depth & stencil unit. While these units take a similar direction as the texture unit – there are no major overhauls here – ARM has confirmed that Bifrost’s blending unit does offer some new functionality not found in Midgard’s. Bifrost’s blender can now blend FP16 targets, whereas Midgard was limited to integer targets. The inclusion of floating point blends not only saves ARM a conversion – Midgard would have to convert FP16s to integer RGBA – but the native FP16 blend means that precision/quality should be improved as well.
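The precision benefit is straightforward to demonstrate. The sketch below compares an alpha blend carried out natively in FP16 against one that first quantizes the FP16 inputs down to 8-bit integers, as Midgard would effectively have to; the conversion details here are illustrative rather than a description of Midgard’s actual hardware.

```python
import numpy as np

# Compare an alpha blend done natively in FP16 against one where the inputs are
# first converted to 8-bit integers. The 8-bit path is an illustration of the
# precision lost in the round trip, not Midgard's exact conversion behavior.

def blend_fp16(src, dst, alpha):
    src, dst, alpha = (np.float16(v) for v in (src, dst, alpha))
    return np.float16(src * alpha + dst * (np.float16(1.0) - alpha))

def blend_unorm8(src, dst, alpha):
    to_u8 = lambda v: round(float(v) * 255.0)           # quantize to 8 bits
    s, d, a = to_u8(src), to_u8(dst), to_u8(alpha)
    out = (s * a + d * (255 - a) + 127) // 255           # integer blend
    return out / 255.0

# Dim values like these show the quantization error most clearly.
src, dst, alpha = 0.0031, 0.0007, 0.5
print(blend_fp16(src, dst, alpha))     # ~0.0019, FP16 precision end to end
print(blend_unorm8(src, dst, alpha))   # ~0.0039, coarsened by the 8-bit round trip
```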

FP16 blends have a throughput of 1 pixel/clock, just like integer blends, so these are full speed. On that note, Bifrost’s ROP hardware does scale with the core count, so virtually every aspect of the architecture will scale up with larger configurations. Given what Mali-G71 can scale to, this means that the current Bifrost implementation can go up to 32px/clock.
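As with the FMA math earlier, the fillrate is a simple multiplication; the clockspeed in the example below is again an assumption rather than an announced figure.

```python
# Pixel fillrate scaling: 1 blended pixel per clock per core (ARM's figure),
# so fillrate grows linearly with core count. The clockspeed is an assumed example.

PIXELS_PER_CLOCK_PER_CORE = 1

def fillrate_gpix_per_s(cores: int, clock_mhz: float) -> float:
    return cores * PIXELS_PER_CLOCK_PER_CORE * clock_mhz * 1e6 / 1e9

# Mali-G71 scales up to 32 cores; at a hypothetical 850MHz that would be:
print(fillrate_gpix_per_s(cores=32, clock_mhz=850))   # ~27.2 Gpix/s
```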

Comments

  • Shadow7037932 - Tuesday, May 31, 2016

    Mobile VR (hopefully, meaning "cheaper" VR) for starters.
  • Spunjji - Wednesday, June 1, 2016

    Your lack of imagination is staggering.
  • Zyzzyx - Wednesday, June 1, 2016

    This technology is going to affect millions, if not billions, of customers over the next few years, while your 1080 will be used by a limited number of gamers. We already know about the Pascal architecture, as it has already been covered. Claiming that mobile GPUs have no importance sounds like Ballmer saying the iPhone had no market.

    You might also have missed the iPad Pro and what you can use it for, and no, it is not mobile gaming ...

    Also, your claim that only Windows 10 is a real OS shows your short-sightedness. We will see in 5-10 years which OS will be dominating, since I am sure both Android and iOS will slowly but surely keep creeping into the professional space as features are added.
  • Wolfpup - Friday, June 3, 2016

    You sound like me LOL. Still interesting though just on a tech level, but I use PCs and game systems as they have actual games (and good interfaces to play them).
  • SolvalouLP - Monday, May 30, 2016

    We are all waiting for the AT review of the GTX 1080, but please, this behaviour is childish at best.
  • Ryan Smith - Monday, May 30, 2016

    And your hope will be rewarded.
  • makerofthegames - Tuesday, May 31, 2016

    Go ahead and take as long as you need. I don't read AT for the hot takes, I read AT because you do real testing and real analysis to give us real information. I definitely want to read the review, but I'm willing to wait for it to be good.
  • edlee - Monday, May 30, 2016

    It all sounds great, but unfortunately I am stuck with devices powered by Qualcomm SoCs, since I am on Verizon and most flagship phones use Qualcomm Snapdragons.
  • Krysto - Monday, May 30, 2016

    It's a shame Samsung isn't selling its Exynos chip to other device makers, isn't it? I mean, it's probably not even economically worth it for Samsung to design a chip for only 1 or 2 of its smartphone models. I don't understand why they don't try to compete more directly with Qualcomm in the chip market. I also don't understand why they aren't buying AMD so they can compete more directly with Intel as well, but I digress.
  • Tabalan - Monday, May 30, 2016

    Samsung sells certain Exynos SoCs to Meizu: the Meizu MX4 Pro had the Exynos 5430, and the Pro 5 had the Exynos 7420. With the Pro 6 they went with the MediaTek X25.
    As for design costs and profit - they used to use CPU cores and GPUs designed by ARM, which is cheaper than buying a license to modify those cores (and you have to add the R&D costs of modifying the uarch). Moreover, Samsung is using their latest process node only for high end SoCs (Apple AX series, Snapdragon 8XX series), which is a very profitable market segment. It could be easier to just manufacture SoCs and get cash for it than to look for partners and buyers for their own SoCs. Plus, they would have to create a whole lineup of Exynos SoCs to compete with Qualcomm (I assume Qualcomm would give a discount for buying chips only from them).
