A Brief History of Mali

ARM as a CPU designer of course needs no introduction. The vast majority of the world’s smartphones and tablets are powered either by an ARM-designed CPU or a CPU based on ARM’s instruction sets. In the world of SoC CPUs, ARM is without a doubt the 800-pound gorilla.

However in the world of SoC GPUs, while ARM is a major competitor they are just one of several. In fact from a technology perspective they’re among the newest, with roots that go back over a decade, though not as far back as those of the other major competitors. This is something of a point of pride for ARM’s GPU team, and as we’ll see, it results in a GPU design that’s not quite like anything else we’ve seen so far.

Like all good GPU stories, the earliest history of ARM’s GPU division goes back to the late 1990s, when what would become ARM’s GPU division was first created. Originally a research project at the Norwegian University of Science and Technology, the core Mali group was spun off to form Falanx Microsystems in 2001. At first Falanx attempted to break into the PC video card market, a risky venture in the post-3dfx era that saw several other PC GPU manufacturers get shaken out – S3, Rendition, Revolution, and Imagination among them – and ultimately a venture that crashed and burned as Falanx was unable to raise funding.

During a period of development they call their “scrappy phase,” Falanx’s limited resources led them to pivot from designing PC GPUs to designing SoC-class GPUs and licensing those designs to SoC integrators, a much easier field to get into. From this change in direction came the first Mali GPUs, and ultimately Falanx’s first customers. Among these was Zoran, which used the Mali-55 for their Approach 5C SoC, which in turn ended up in a couple of notable products, including LG’s Viewty. But with that said, Falanx never saw great success on their own, never landing the “big fish” they were hoping for.


LG's Viewty, One of Mali's First Wins

Ultimately, as the SoC industry began to heat up on the back of growing cell phone sales, ARM purchased Falanx in 2006. This gave ARM a capable (if previously underutilized) GPU division to create GPUs to go along with its growing CPU business. From a business perspective the two companies were a solid match for each other: Falanx was already in the business of licensing GPUs, so the acquisition was largely a continuation of the status quo. And what was becoming ARM’s GPU division gained just as much as ARM did, since the deal gave the Mali team access to ARM’s engineering and validation resources, capabilities that were harder to come by as a struggling third-party GPU designer.

Now as a part of ARM, the Mali team released their first OpenGL ES 2.0 design in 2007, the Mali-200. Mali-200 and its immediate successors Mali-300, Mali-400, and Mali-450 were based on the team’s Utgard architecture. Utgard was a non-unified GPU architecture (discrete pixel and vertex shaders) designed for SoC-class graphics, and over the years it received various upgrades to improve performance and scalability, most notably with Mali-400, which introduced multi-core support to the Mali line.

Even to this day Utgard is arguably the Mali team’s most successful GPU architecture, thanks in large part to its no-frills ES 2.0 design and the resulting low die space requirements. Along with driving high-end SoCs of its era, including those that powered devices such as the Samsung Galaxy S II, Utgard remains a popular mid-range GPU, with Mali-450 securing a spot just this week in Samsung’s forthcoming Galaxy S5 Mini.


Mali-450 Lives On In Samsung's Galaxy S5 Mini

Now with a team of nearly 500 people, and having shipped 400 million Mali GPUs in 2013 (or as close to “shipping” as an IP licensor can get), the Mali team’s latest architecture and the subject of today’s article is Midgard. Midgard ships as the basis of ARM’s Mali-T600 and Mali-T700 family GPUs, and while it was initially introduced as a high-end architecture, it is making the transition to cover both the high end and the mid-range through the recent introduction of more space/power/cost optimized designs. After replacing Utgard at the high end last year, this year will see Utgard finally replaced in the mid-range market as well.

Comments

  • seanlumly - Thursday, July 3, 2014

    Given the "exotic" ILP and the 128-bit VLIW SIMD, the Mali looks like an impressive performer. If a Mali T760MP10 is indeed a part fit for smartphone-level power consumption, then quick linear scaling -- given the GFXBench scores of a T760MP4 -- would imply that such a GPU is very competitive with something like the Adreno 420, and certainly impressive if scaled up further to tablet-level power consumption. If, however, an MP10 consumes roughly as much as a K1 or GX6650, then I'm very sceptical about its competitive performance.

    I find the distribution of ALUs to memory units strange given mobile bandwidth limits. The Mali T760 in an 8-core configuration clocked at 600MHz will allow for 19.2 GB/s of load/store access (256 bits/clock; source: ARM). This is quite high memory bandwidth, and an increase in GPU clocks or cores will likely yield idling load/store units doing nothing but taking up valuable die area. Operations with high varying access, cache reads, and tile reads/writes will of course make good use of these additional units, but it still seems like overkill in all but very memory-access-heavy apps. ARM would know best, though I'm suspicious that so many load/store units are needed for common workloads. I would guess that in common scenarios, bandwidth to main memory would be exhausted long before all of the memory units were fully utilized on a Mali T760 of high core count (e.g. MP12-16).
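
    As a rough sanity check on that 19.2 GB/s figure, here is a minimal back-of-the-envelope sketch in Python (the 256 bits/clock and 600MHz inputs are ARM's figures quoted above; treating that width as the GPU-wide total for the load/store path is my assumption):

        # Load/store bandwidth sketch for a Mali T760 at 600 MHz,
        # assuming ARM's quoted 256 bits/clock is the GPU-wide total.
        bits_per_clock = 256                  # load/store width per clock (ARM)
        clock_hz = 600e6                      # 600 MHz core clock
        bytes_per_clock = bits_per_clock / 8  # 32 bytes per clock
        bandwidth_gb_s = bytes_per_clock * clock_hz / 1e9
        print(f"{bandwidth_gb_s:.1f} GB/s")   # -> 19.2 GB/s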

    Decoupling the load/store and texture units as their own "core" may allow more appropriate scaling to fit the bandwidth of the target system. A system with an ultra-high-resolution display could be endowed with more load/store and/or texture units, while a system with a lower resolution could use fewer and opt for more ALUs in the same space. This would be similar to big.LITTLE (different cores for different targeted workloads). In this scenario, the memory-unit cores could be scaled independently of the ALU cores, perfectly tailored to the target system.
  • EdvardS - Friday, July 4, 2014

    Remember that there is a cache system between those units and the SoC memory controller. Bandwidth to the cores is quite different from bandwidth to the DDR memory.
  • seanlumly - Friday, July 4, 2014

    Indeed! Bandwidth to cache, tile memory, varying data, or textures would likely be more relevant with a tile-based renderer that often exploits spatial locality when processing batches of pixels. This is especially true with modern screen-space effects that do multiple dependent reads per display pixel (eg. SSAO), but are strongly confined to buffer fragments surrounding the target pixel. Such situations would value having many LS/Tex units at little penalty.

    But I do still like the idea of an independently scalable "memory" core (containing load/store/texture pipes) to complement a "math" core (containing ALUs). A high-performance system targeting a 720p display will likely consume far less bandwidth than one targeting a 4K display, and as such, it would be nice to trade LS/Tex units for more ALUs in such a case.

    Such an arrangement may also give ARM more leeway when making predictions about a new architecture -- no doubt the Midgard architecture was in development for many years before it saw implementation in a retail product, which means that ARM would have had to guess trends (e.g. resolution) far in advance to attain the right balance of on-chip units per core; independently scalable memory cores would be more forgiving if the trends turned out not to match the initial predictions.
  • seanlumly - Friday, July 4, 2014

    Actually, I am starting to understand the motivation behind the ratio of ALUs to memory units in a Midgard core. I notice in GFXBench 3.0 "Manhattan" that the Mali T760MP4 (Rockchip RK3288) performs incredibly well at 720p, but its performance drops off more than proportionately as the resolution increases. This may imply that at higher-than-720p resolutions the 4-core variant of the GPU cannot keep up with the memory demands, as computation alone should scale very close to proportionally.

    Thus the 1080p offscreen score for the Mali T760 MP4 in the GFXBench 3.0 (offline) database may be misleading, as the MP4 may be a bit small for this resolution, and thus its performance may be low relative to the competition. A T760 MP8 would likely more than double the performance for a doubling of the resolution, pushing the Mali T760 MP8 well beyond the competition at what I suspect are similar levels of power and die size. I predict that a T760 MP8 would score slightly north of 16fps in GFXBench 3.0 (assuming adequate bandwidth to DDR). Even an MP6 variant of the GPU (as was the case with the T628 MP6 in the Exynos 5420) should put it more-or-less on par with the competition!
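
    A minimal sketch of that linear-scaling estimate (the MP4's 1080p "Manhattan" baseline below is a hypothetical placeholder, not a measured figure -- substitute the actual number from the GFXBench database):

        # Linear core-count scaling for GFXBench 3.0 "Manhattan", 1080p offscreen.
        mp4_cores, mp8_cores = 4, 8
        mp4_fps_1080p = 8.0   # placeholder for the measured T760 MP4 score
        mp8_fps_1080p = mp4_fps_1080p * (mp8_cores / mp4_cores)
        # ~16 fps, and likely a floor: if the MP4 is memory-limited at 1080p,
        # an MP8 should scale slightly better than linearly.
        print(f"Predicted T760 MP8: ~{mp8_fps_1080p:.0f} fps")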

    The Mali performs even better in the GFXBench 2.7 "T-Rex" test, where the small Mali T760 MP4 surpasses the competition at 720p, and even at sub-720p resolutions in some instances! This is incredible. In this case it seems that the test is more computation-bound, as performance scales more proportionally with resolution.

    I hope that future SoCs use the T760 in higher-core-count configurations. I still like the idea of a memory core, though I have little doubt that a Mali GPU of evenly matched size can go toe-to-toe with the competition.
  • Frenetic Pony - Friday, July 4, 2014

    Every time I read an overview of a SoC GPU I am so, so glad I don't do anything with mobile stuff. "We support tessellation! I mean, don't actually do it. Ever. But you know, it's supported."
  • kkb - Friday, July 4, 2014

    How come there is no comparison with Intel GPUs like the ones in Bay Trail?
  • darkich - Friday, July 4, 2014

    Because there is no comparison, period.
    That GPU is completely inferior compared to the latest Mali, PowerVR, and Adreno architectures.
  • Krysto - Friday, July 4, 2014

    Word.
  • kkb - Monday, July 7, 2014

    Well... I don't really agree. Please look at the AT review from last week or so: http://www.anandtech.com/show/8197/samsung-galaxy-...
    The MeMO Pad is a Bay Trail product and definitely performs better than Mali devices.
  • darkich - Monday, July 7, 2014

    Get your facts and reading skills in order.

    Firstly, the GPU in the MeMO Pad is definitely not performing better than even the Mali T628; in fact those very tests show it merely trades blows with it, mostly due to the MeMO Pad's much lower resolution screen.

    Secondly, do you realize that the T760 is MUCH faster than the T628?

    You can see here that it is basically comparable to the Tegra K1 and even the intimidating Series 6XT doesn't trounce it.

    Rest assured that any of these three, as well as the Adreno 420, is way above the ULP HD Graphics chip.
