HyperLane Technology

Another new addition to the A-Series GPU is Imagination's “HyperLane” technology, which promises to vastly expand the flexibility of the architecture in terms of multi-tasking as well as security. Imagination GPUs have had virtualization abilities for some time now, and this has given them an advantage in focus areas such as automotive designs.

The new HyperLane technology is said to be an extension of virtualization, going beyond it in terms of separating tasks executed on a single GPU.

In a typical rendering flow there are different kinds of “master” controllers, each handling the dispatching of workloads to the GPU: geometry is handled by the geometry data master, pixel processing and shading by the 3D data master, 2D operations by the 2D data master, and compute workloads by, you guessed it, the compute data master.
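
To make the division of labor concrete, here is a minimal C sketch of the idea; the names and the routing function are hypothetical illustrations, not Imagination's actual hardware interface:

    #include <stdio.h>

    /* Hypothetical names -- illustrative only, not Imagination's interface. */
    typedef enum {
        DM_GEOMETRY,  /* geometry data master */
        DM_3D,        /* 3D data master: pixel processing and shading */
        DM_2D,        /* 2D data master */
        DM_COMPUTE    /* compute data master */
    } data_master;

    typedef enum { WL_GEOMETRY, WL_PIXEL, WL_BLIT, WL_COMPUTE } workload;

    /* Each workload type is dispatched to the GPU by exactly one master. */
    static data_master route(workload w)
    {
        switch (w) {
        case WL_GEOMETRY: return DM_GEOMETRY;
        case WL_PIXEL:    return DM_3D;
        case WL_BLIT:     return DM_2D;
        default:          return DM_COMPUTE;
        }
    }

    int main(void)
    {
        printf("2D blit handled by data master %d\n", route(WL_BLIT));
        return 0;
    }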

In each of these processing flows, various blocks of the GPU are active for a given task, while other blocks remain idle.

HyperLane technology is said to enable full task concurrency on the GPU hardware, with multiple data masters able to be active simultaneously, executing work dynamically across the GPU’s hardware resources. In essence, the whole GPU becomes multi-tasking capable, receiving different task submissions from up to 8 sources (hence 8 HyperLanes).

The new feature sounded to me like a hardware-based scheduler for task submissions, although when I brought up this description, the Imagination spokespeople were rather dismissive of the simplification, saying that HyperLanes go far deeper into the hardware architecture: each HyperLane, for example, can be configured with its own virtual memory space, or can share arbitrary memory spaces with other HyperLanes.
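
As a concrete sketch of what per-lane memory configuration could look like (entirely hypothetical structures, since the real programming model is not public):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_HYPERLANES 8   /* up to 8 independent submission sources */

    /* Hypothetical per-lane state: each HyperLane can own a private
     * virtual memory space, or reference one shared with other lanes. */
    typedef struct {
        uint32_t address_space_id;
        bool     shared;
    } hyperlane;

    static hyperlane lanes[NUM_HYPERLANES];

    int main(void)
    {
        /* Default: every lane gets its own isolated address space. */
        for (uint32_t i = 0; i < NUM_HYPERLANES; i++)
            lanes[i] = (hyperlane){ .address_space_id = i, .shared = false };

        /* Or: lanes 2 and 3 share one space, e.g. for cooperating tasks. */
        lanes[3].address_space_id = lanes[2].address_space_id;
        lanes[2].shared = lanes[3].shared = true;

        printf("lane 3 uses space %u (shared=%d)\n",
               lanes[3].address_space_id, lanes[3].shared);
        return 0;
    }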

Splitting of the GPU's resources can happen at the block level, concurrently with other tasks, or resources can be shared in the time domain with time-slices between HyperLanes. HyperLanes can also be given priorities, such as prioritizing graphics over a background AI task that uses the remaining free resources.
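
Under the same caveat, the two sharing modes and lane priorities might be expressed like this (assumed structures, not the actual API):

    #include <stdio.h>

    /* Hypothetical sketch of the two resource-sharing modes and priorities. */
    typedef enum {
        SHARE_SPATIAL,   /* GPU blocks split among concurrently active lanes */
        SHARE_TEMPORAL   /* lanes time-sliced on the same blocks */
    } share_mode;

    typedef struct {
        share_mode mode;
        int        priority;   /* higher value wins contended resources */
    } lane_policy;

    /* Graphics takes priority; a background AI lane gets leftover resources. */
    static const lane_policy graphics_lane = { SHARE_SPATIAL, 10 };
    static const lane_policy ai_lane       = { SHARE_SPATIAL,  1 };

    int main(void)
    {
        printf("graphics priority %d vs. AI priority %d\n",
               graphics_lane.priority, ai_lane.priority);
        return 0;
    }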

The security advantages of such a technology also seem significant, with the company citing use-cases such as isolation for protected content and rights management.

An interesting application of the technology is the synergy it enables between an A-Series GPU and the company’s in-house neural network accelerator (NNA) IP. AI workloads could be shared between the two IP blocks, with the GPU, for example, handling the more programmable layers of a model while still taking advantage of the NNA’s efficiency for fixed-function fully-connected layer processing.
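
As a rough illustration of that split, a scheduler could place fixed-function-friendly layers on the NNA and programmable ones on the GPU; the layer and target names below are hypothetical, not Imagination's actual SDK:

    #include <stdio.h>

    /* Hypothetical layer taxonomy and placement, illustrative only. */
    typedef enum { LAYER_CONV, LAYER_FULLY_CONNECTED, LAYER_CUSTOM } layer;
    typedef enum { TARGET_GPU, TARGET_NNA } target;

    static target place(layer l)
    {
        switch (l) {
        case LAYER_FULLY_CONNECTED: return TARGET_NNA; /* fixed-function path */
        case LAYER_CONV:            return TARGET_NNA; /* assumed NNA-friendly */
        default:                    return TARGET_GPU; /* programmable layers */
        }
    }

    int main(void)
    {
        layer net[] = { LAYER_CONV, LAYER_CUSTOM, LAYER_FULLY_CONNECTED };
        for (int i = 0; i < 3; i++)
            printf("layer %d -> %s\n", i,
                   place(net[i]) == TARGET_NNA ? "NNA" : "GPU");
        return 0;
    }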

Three Dozen Other Microarchitectural Improvements

The A-Series comes with numerous other microarchitectural advancements that are said to benefit the GPU IP.

One such existing feature is the integration of a small dedicated CPU (which we understand to be RISC-V based) acting as a firmware processor, handling GPU management tasks that in other architectures might still be handled by drivers on the host system CPU. The firmware processor approach is said to achieve more performant and efficient handling of various housekeeping tasks such as debugging, data logging, GPIO handling, and even DVFS algorithms. As a contrasting example, DVFS for Arm Mali GPUs is still handled by the kernel GPU driver on the host CPU.
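
To give a flavor of what offloading DVFS to a firmware processor means, here is a minimal assumed sketch of a utilization-driven governor tick; the operating-point switch is a stand-in for real firmware I/O:

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_OPPS 5
    static int opp = 0;  /* current operating point (frequency/voltage pair) */

    /* Hypothetical stand-in for the real frequency/voltage switch. */
    static void set_operating_point(int index)
    {
        opp = index;
        printf("switched to operating point %d\n", index);
    }

    /* Simple utilization-threshold governor tick, the kind of housekeeping
     * that runs on the firmware CPU instead of a host-side kernel driver. */
    static void dvfs_tick(uint32_t busy_cycles, uint32_t total_cycles)
    {
        uint32_t util = total_cycles ? (100u * busy_cycles) / total_cycles : 0;

        if (util > 90 && opp < NUM_OPPS - 1)
            set_operating_point(opp + 1);   /* ramp up when nearly saturated */
        else if (util < 60 && opp > 0)
            set_operating_point(opp - 1);   /* ramp down when underutilized */
    }

    int main(void)
    {
        dvfs_tick(95, 100);  /* heavy load -> ramp up */
        dvfs_tick(10, 100);  /* light load -> ramp down */
        return 0;
    }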

An interesting new development feature, enabled by profiling the GPU’s hardware counters through the firmware processor, is the creation of tile heatmaps of the execution resources used. This may sound relatively banal, but it isn’t something that’s readily available to software developers today, and it could be extremely useful for quickly debugging and optimizing 3D workloads thanks to its more visual approach.
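
As a toy sketch of the idea, the following maps per-tile counter values (a hypothetical data source) onto an ASCII heatmap:

    #include <stdint.h>
    #include <stdio.h>

    #define TILES_X 8
    #define TILES_Y 4

    /* Render per-tile cost counters as an ASCII heatmap:
     * '.' means cheap, '#' means expensive. */
    static void print_heatmap(uint32_t cost[TILES_Y][TILES_X], uint32_t max_cost)
    {
        static const char shade[] = ".:-=+*#";
        for (int y = 0; y < TILES_Y; y++) {
            for (int x = 0; x < TILES_X; x++) {
                uint32_t level = (cost[y][x] * 6u) / (max_cost ? max_cost : 1u);
                putchar(shade[level > 6u ? 6 : level]);
            }
            putchar('\n');
        }
    }

    int main(void)
    {
        /* Dummy counter values standing in for firmware-collected data. */
        uint32_t cost[TILES_Y][TILES_X] = { { 10, 20, 90, 40, 5, 0, 60, 30 } };
        print_heatmap(cost, 90);
        return 0;
    }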

Comments

  • The_Assimilator - Tuesday, December 3, 2019

    So what you're saying is that this "fastest GPU IP ever created" has theoretical throughput figures that are lower than those of a two-generation-old midrange desktop part.

    Man, it's gonna be exciting when this is released and it's total unmitigated shite, like every mobile GPU ever.
  • ET - Wednesday, December 4, 2019

    For me a more useful comparison point is the consoles. The Xbox One S is 1.4 TFLOPS, the PS4 is 1.84 TFLOPS, and, more to the point, the Switch supposedly reaches 1 TFLOPS at maximum for 16-bit, but in practice, and for 32-bit, it's around 400 GFLOPS (when docked).

    So in theory the AXT-64-2048 could make for quite a decent low-power console chip, and a good upgrade avenue for Nintendo.

    (Sure, Xbox and PS have moved a little forward since then, and will move more next year, but, as an owner of a One S, I still find it quite impressive what can be achieved with this kind of GPU power.)
  • mode_13h - Wednesday, December 4, 2019

    Nintendo Switch uses the Tegra X1, which was made to be a high-end tablet SoC. So, by extension, it's not surprising that a modern candidate for that application would potentially be a worthy successor for the Switch.

    Speaking of set top consoles, you're citing 2013-era models (okay, the One S is more recent, but really a small tweak on the original spec). If you instead look at the PS4 Pro and One X, then you'll see that the set top consoles have moved far beyond this GPU.
  • Lolimaster - Tuesday, December 3, 2019

    They just lost it, now even worse with AMD making its return to ARM SoCs.
  • melgross - Tuesday, December 3, 2019

    Imagination was in trouble for a long time. The reason Apple, and Microsoft before that, left was that Imagination refused to go along with both companies' requests for custom IP. Apple, for example, needed more work on AI and ML. Imagination refused to work on that for them, which was a major mistake, as Apple was half their business and generated more than half of their profit.

    When Apple announced they were developing their own GPU, they said that within two years they would no longer be using any Imagination IP. Imagination confirmed that. The assumption there was that older SoCs that Apple would continue to use for other devices would still incorporate the IP until they had been superseded by newer versions.

    It’s believed that newer Apple SoCs contain no Imagination IP.

    It’s interesting to see that this new Imagination IP seems to be close to what Apple wanted, but which Imagination refused to give them. A fascinating turnabout. Now it remains to be seen whether this serious improvement upon their older IP is really competitive with the newest IP from others once it's actually in production, assuming it will really be used.
  • Andrei Frumusanu - Tuesday, December 3, 2019

    > When Apple announced they were developing their own GPU, they said that within two years they would no longer be using any Imagination IP. Imagination confirmed that.

    The only thing Imagination confirmed is that Apple told them that. Ironically, all those press releases and all official mentions of this have since disappeared from both companies, which is essentially a sign that the two companies buried the hatchet and came to some form of agreement.

    > It’s believed that newer Apple SoCs contain no Imagination IP.

    Well no, we're still here two years later. Apple's GPUs still very much look like PowerVR GPUs with similar block structures, they are still using IMG's proprietary TBDR techniques, and even publicly expose proprietary features such as PVRTC. Saying Apple GPUs contain none of IMG's IP is just incompetent on the topic.
  • melgross - Tuesday, December 3, 2019

    Well, I’m going by what Apple themselves have said. So if you think they’re lying, good for you. But I’ll take their statements as fact first.
  • Qasar - Tuesday, December 3, 2019

    just like you seem to do with intel ???
  • mode_13h - Wednesday, December 4, 2019

    You saw that Andrei worked there 'till 2017, right? So, yeah, go ahead and argue with him. You're obviously the expert, here.
  • Korguz - Wednesday, December 4, 2019

    mode_13h, of course he is. he believes all the lies and BS that intel is also saying....
