Taking place this week in San Francisco is the 2014 Game Developers Conference. Though not necessarily a hardware show, in years past we have seen gaming-related product announcements in both the desktop space and the mobile space. This year is no exception, and in the mobile space in particular it looks to be a very busy year. As we saw on Monday, Khronos is at the show announcing OpenGL ES 3.1, with more announcements to come in the days that follow.

But it’s Imagination Technologies that may end up taking the spotlight for the most groundbreaking announcement. Never a stranger to GDC, Imagination is at the 2014 show to announce a new family of GPU designs. Dubbed “Wizard”, it is a family of GPUs unlike anything we’ve seen before.

To get right to the point then, what is Wizard? In short, Wizard is Imagination’s long-expected push for ray tracing in the consumer GPU space. Ray tracing itself is not new to the world of GPUs – it can be implemented somewhat inefficiently in shaders today – but with Wizard, Imagination intends to take it one step further. To that end Imagination has been working on the subject of ray tracing since 2010, including steps such as developing the OpenRL API and releasing dedicated ray tracing cards, and now they are ready to take the next step by introducing dedicated ray tracing hardware into their consumer GPU designs.
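To make the concept concrete, here is a deliberately naive sketch of our own in plain C++ (it does not use Imagination’s OpenRL API or any vendor code): one ray is fired per pixel and tested against every object in a toy scene, which is roughly the per-pixel work a shader-based implementation has to emulate, and why it is so inefficient without dedicated hardware.

```cpp
// A deliberately naive ray tracer (our own illustration): one primary ray per
// pixel, brute-force tested against every sphere in a toy scene.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec3 center; float radius; };

// Distance along the (normalized) ray to the nearest hit, or -1 on a miss.
static float intersect(Vec3 origin, Vec3 dir, const Sphere& s) {
    Vec3  oc   = sub(origin, s.center);
    float b    = 2.0f * dot(oc, dir);
    float c    = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - 4.0f * c;
    return disc < 0.0f ? -1.0f : (-b - std::sqrt(disc)) / 2.0f;
}

int main() {
    std::vector<Sphere> scene = {{{0.0f, 0.0f, -3.0f}, 1.0f}};
    const int width = 32, height = 16;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Fire one primary ray per pixel through a simple pinhole camera.
            Vec3  dir = {x / float(width) - 0.5f, y / float(height) - 0.5f, -1.0f};
            float len = std::sqrt(dot(dir, dir));
            dir = {dir.x / len, dir.y / len, dir.z / len};
            bool hit = false;
            for (const Sphere& s : scene)
                hit = hit || intersect({0.0f, 0.0f, 0.0f}, dir, s) > 0.0f;
            std::putchar(hit ? '#' : '.');
        }
        std::putchar('\n');
    }
    return 0;
}
```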

The end result of that effort is the Wizard family of GPUs. Wizard is essentially an extension of Imagination’s existing PowerVR Series6XT Rogue designs, taking the base hardware and adding the additional blocks necessary to do ray tracing. The result is a GPU that is meant to be every bit as capable as Rogue in traditional rasterization rendering, while also possessing the ability to do ray tracing on top of that.

The Wizard Architecture

The first Wizard design, being introduced today, is the GR6500. The GR6500 is a variant of the existing Series6XT GX6450, possessing the same 128-ALU USC array and augmented with Imagination’s ray tracing hardware. From a high level overview there’s nothing new about GR6500’s Rogue-based rasterization hardware, and since we covered that in depth last month in our look at Rogue, we’ll dive right into Imagination’s new ray tracing blocks for Wizard.

The Wizard architecture’s ray tracing functionality is composed of 4 blocks: the Ray Tracing Data Master, the Ray Tracing Unit, the Scene Hierarchy Generator, and the Frame Accumulation Cache. These blocks are built along Rogue’s common bus, and in some cases wired directly to the USC Array, allowing them to both push data onto the rasterization hardware and pull data off of it for further processing.

Starting with the Ray Tracing Data Master, this is the block responsible for feeding ray data back into the rasterization hardware. Once the Ray Tracing Unit has calculated rays and their intersections, the data master passes that data on to the rasterization hardware, where it is consumed in resolving the final pixel values.

Meanwhile the Ray Tracing Unit (RTU) is the centerpiece of Imagination’s ray tracing hardware. A fixed function (non-programmable) block where the actual ray tracing work goes on, the RTU contains two sub-blocks: the Ray Interaction Processor, and the Coherency Engine. The Ray Interaction Processor in turn is where the ALUs responsible for ray tracing reside, while the Coherency Engine is responsible for keeping rays organized and coherent (close to each other) for efficient processing.
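Imagination hasn’t detailed how the Coherency Engine actually groups rays, but the general idea of coherency sorting can be illustrated in software. The sketch below is purely our own illustration, not a description of the hardware: rays are binned by their dominant direction so that rays likely to traverse similar parts of the scene hierarchy are processed back to back.

```cpp
// Our own software illustration of ray coherency sorting; the real Coherency
// Engine's scheme has not been disclosed. Rays are binned by dominant
// direction axis and sign before being traced as batches.
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

struct Ray { float ox, oy, oz; float dx, dy, dz; };

std::array<std::vector<Ray>, 6> binByDominantDirection(const std::vector<Ray>& rays) {
    std::array<std::vector<Ray>, 6> bins;  // +X, -X, +Y, -Y, +Z, -Z
    for (const Ray& r : rays) {
        float ax = std::fabs(r.dx), ay = std::fabs(r.dy), az = std::fabs(r.dz);
        std::size_t axis = (ax >= ay && ax >= az) ? 0 : (ay >= az ? 1 : 2);
        float component  = axis == 0 ? r.dx : (axis == 1 ? r.dy : r.dz);
        bins[axis * 2 + (component < 0.0f ? 1 : 0)].push_back(r);
    }
    return bins;  // each bin can now be processed as one coherent batch
}

int main() {
    std::vector<Ray> rays = {{0, 0, 0, 1.0f, 0.1f, 0}, {0, 0, 0, -0.2f, 0, 0.9f}};
    auto bins = binByDominantDirection(rays);
    return bins[0].size() == 1 ? 0 : 1;  // first ray is dominantly +X
}
```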

Notably, the RTU is a stand-alone unit. That is, it doesn’t rely on the ALUs in the USC Array to do any of its processing. All ray tracing work takes place in the RTU, leaving the USC Array to its traditional rasterization work and ultimately consuming the output of the RTU for final pixel resolution. This means that the addition of ray tracing hardware doesn’t impact theoretical rasterization performance, though in the real world it’s reasonable to expect some performance tradeoff from the RTU consuming memory bandwidth.

Finally we have the Scene Hierarchy Generator and Frame Accumulation Cache. Imagination’s description of these units was fairly limited, and we don’t know a great deal about them beyond the fact that the Scene Hierarchy Generator is used in dynamic object updates, while the Frame Accumulation Cache provides write-combining scattered access to the frame buffer. In the case of the latter this appears to be a block intended to offset the naturally incoherent nature of ray tracing, which makes efficient memory operations and caching difficult.
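As a rough illustration of what write-combining scattered frame buffer access could look like, here is a software sketch of our own devising, not a description of Imagination’s hardware: scattered per-ray contributions are staged in small tiles and written back to memory one contiguous tile at a time. The tile size and flush policy here are assumptions made purely for the example.

```cpp
// A software sketch (our own) of write-combining scattered frame buffer
// updates into tile-sized bursts; tile size and flush policy are assumptions.
#include <map>
#include <vector>

constexpr int kTile = 8;  // assumed tile edge in pixels (framebuffer sized in multiples of it)

struct TileKey {
    int tx, ty;
    bool operator<(const TileKey& o) const { return tx != o.tx ? tx < o.tx : ty < o.ty; }
};

class AccumulationCache {
public:
    AccumulationCache(int width, std::vector<float>& framebuffer)
        : width_(width), fb_(framebuffer) {}

    // Scattered ray contributions are staged per tile rather than hitting memory directly.
    void add(int x, int y, float value) {
        std::vector<float>& tile = tiles_[{x / kTile, y / kTile}];
        if (tile.empty()) tile.assign(kTile * kTile, 0.0f);
        tile[(y % kTile) * kTile + (x % kTile)] += value;
    }

    // Write each staged tile back to the frame buffer as one contiguous burst.
    void flush() {
        for (const auto& entry : tiles_) {
            const TileKey& key = entry.first;
            const std::vector<float>& tile = entry.second;
            for (int j = 0; j < kTile; ++j)
                for (int i = 0; i < kTile; ++i)
                    fb_[(key.ty * kTile + j) * width_ + key.tx * kTile + i] += tile[j * kTile + i];
        }
        tiles_.clear();
    }

private:
    int width_;
    std::vector<float>& fb_;
    std::map<TileKey, std::vector<float>> tiles_;
};

int main() {
    std::vector<float> framebuffer(64 * 64, 0.0f);
    AccumulationCache cache(64, framebuffer);
    cache.add(3, 5, 0.25f);   // scattered writes...
    cache.add(40, 41, 0.5f);
    cache.flush();            // ...combined into two tile-sized bursts
    return 0;
}
```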


A basic model of ray tracing. Image courtesy Henrik

As we mentioned before, while the ray tracing hardware is tied into Wizard’s rasterization hardware and data buses, it fundamentally operates separately from that hardware. For GR6500 this means that at Imagination’s reference clockspeed of 600MHz it will be able to deliver up to 300 million rays per second, 24 billion node tests per second, and 100 million dynamic triangles per second. This would be alongside GR6500’s 150 GFLOPS of FP32 computational throughput in the USC Array. Of course like all PowerVR designs, this is a reference design licensed as IP to SoC designers, and as such the actual performance for a GR6500 will depend on how high the shipping clockspeed is in any given SoC.
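For those keeping score, the arithmetic behind those figures works out as in the snippet below, assuming the 128 FP32 ALUs each retire one FMA (2 FLOPs) per clock and that the ray tracing rates scale linearly from the 600MHz reference point; this is just our own back-of-the-envelope math, not vendor data.

```cpp
// Back-of-the-envelope scaling of the quoted GR6500 figures with clockspeed,
// assuming 128 FP32 ALUs x 1 FMA (2 FLOPs) per clock and linear scaling of
// the ray tracing rates from Imagination's 600MHz reference numbers.
#include <cstdio>

int main() {
    const double refClockGHz = 0.6;   // Imagination's 600MHz reference clock
    const double clockGHz    = 0.6;   // change this to model a specific SoC implementation
    const double scale       = clockGHz / refClockGHz;

    double gflops = 128 * 2 * clockGHz;   // ~153.6 GFLOPS at 600MHz ("150 GFLOPS")
    double mrays  = 300.0 * scale;        // million rays per second
    double gnodes = 24.0  * scale;        // billion node tests per second
    double mtris  = 100.0 * scale;        // million dynamic triangles per second

    std::printf("%.1f GFLOPS, %.0f Mrays/s, %.0f Gnode-tests/s, %.0f Mtris/s\n",
                gflops, mrays, gnodes, mtris);
    return 0;
}
```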

With that in mind, while we already know a fair bit about Wizard’s design, Imagination isn’t saying a lot about some of the finer implementation details at this time. At a low level we don’t know much about how the RTU’s ALUs are organized and how they gather rays, for example. And at a high level we don’t know what the die size or power consumption cost is for implementing this ray tracing hardware, and consequently how GR6500 compares to GX6450 and GX6650. Unquestionably it’s bigger and more power hungry; we just don’t know to what degree.

Meanwhile it is worth pointing out that Imagination is targeting GR6500 in the same mobile/embedded SoC space as the Rogue GPUs, so it’s clear that the power requirements in particular can’t be too much higher than the rasterization-only Rogue hardware. With GR6500 just now being announced and the typical SoC development cycle putting shipping silicon into 2015 (or later), these are blanks that we expect we’ll be able to fill in as GR6500 gets closer to shipping.

Why Ray Tracing?

With the matter of the Wizard architecture out of the way, this brings us to our second point: why ray tracing?

The short answer is that there are certain classes of effects that can’t easily be implemented on traditional rasterization hardware. The most important of these effects fall under the umbrellas of lighting/shadows, transparencies, and reflections. All of these effects can be implemented today in some manner on traditional rasterization-based pipelines, but those implementations come with some sort of drawback, be it in memory, geometry, performance, or overall accuracy.
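Reflections are a good illustration of the difference. Computing the reflected direction is trivial, but resolving what that direction actually hits is a scene-wide visibility query that rasterization can only approximate with cube maps or screen-space tricks, whereas a traced ray answers it exactly. The snippet below, our own illustration rather than anything from Imagination, shows just the reflection math; the scene query itself is the part ray tracing hardware would handle.

```cpp
// Our own illustration of why reflections map naturally onto rays: the
// reflected direction is trivial to compute, but resolving what it hits is a
// scene-wide visibility query (omitted here) that rasterization can only
// approximate.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror the incoming direction d about the surface normal n: r = d - 2(d.n)n.
Vec3 reflectDir(Vec3 d, Vec3 n) {
    float k = 2.0f * dot(d, n);
    return {d.x - k * n.x, d.y - k * n.y, d.z - k * n.z};
}

int main() {
    Vec3 view   = {0.0f, -1.0f, 0.0f};  // looking straight down
    Vec3 normal = {0.0f,  1.0f, 0.0f};  // flat, upward-facing surface
    Vec3 r = reflectDir(view, normal);  // expect (0, 1, 0): straight back up
    std::printf("reflected direction: (%.1f, %.1f, %.1f)\n", r.x, r.y, r.z);
    return 0;
}
```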

The point then of putting ray tracing hardware in a PowerVR GPU is to offer dedicated, fixed function hardware to accelerate these processes. By pulling these effects out of the shaders and implementing them with superior ray tracing-based algorithms, the quality of the effect should go up while the power/resource cost of the effect may very well go down.

The end result is that Imagination envisions games that implement a hybrid rendering model. Rasterization is still the fastest and most effective way to implement a number of real-time rendering steps – rasterization has been called the ultimate cheat in graphics, and over the years hardware and software developers have gotten very, very good at this cheating – so the idea is to continue to use rasterization where it makes sense, and then implement ray tracing effects on top of scene rasterization when they are called for. The combination of the two forms a hybrid model that maintains the benefits of rasterization while adding the benefits of ray tracing.
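In practice, a hybrid frame might look something like the following sketch. Every function and type name here is our own placeholder for illustration, not Imagination’s API: rasterize the scene as usual, fire rays only for the effects that benefit, then composite the results.

```cpp
// A sketch of the hybrid frame flow described above. All names are our own
// placeholders; the stubs merely show where each pass would slot in.
#include <vector>

struct GBuffer    { std::vector<float> depth, normals, albedo; };
struct RayResults { std::vector<float> shadowMask, reflectionColor; };

GBuffer    rasterizeGBuffer()                  { return {}; }  // stub: conventional rasterization pass
RayResults traceShadowRays(const GBuffer&)     { return {}; }  // stub: ray-traced shadows on top
RayResults traceReflectionRays(const GBuffer&) { return {}; }  // stub: ray-traced reflections on top
void       composite(const GBuffer&, const RayResults&, const RayResults&) {}  // stub: final blend

void renderFrame() {
    // 1. Rasterize the scene as usual; this remains the fast path for visibility.
    GBuffer g = rasterizeGBuffer();
    // 2. Fire rays only for the effects that benefit: shadows, reflections, etc.
    RayResults shadows     = traceShadowRays(g);
    RayResults reflections = traceReflectionRays(g);
    // 3. Combine the rasterized image with the ray-traced contributions.
    composite(g, shadows, reflections);
}

int main() { renderFrame(); return 0; }
```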

On the other hand, Imagination is also being rather explicit that they are pursuing the hybrid model instead of a pure ray tracing model. Pure ray tracing is often used in “offline” work – rendering CGI for movies, for example – and while other vendors such as Intel have pursued real-time ray tracing, this is not the direction Imagination intends to go. Their ray tracing hardware is fixed function, making it small and powerful but also introducing limitations that make it less than ideal for pure ray tracing, but a solid match for the hybrid model the company is pursuing.

First Thoughts

Of all the questions Imagination’s ray tracing technology raises, however, the biggest one is also the simplest: will Wizard and ray tracing be successful? From a high level overview Imagination’s ideas appear to be well thought out and otherwise solid, but of course we are hardly game developers, and as such aren’t in a great position to evaluate just how practical implementing a hybrid rendering model would be. It’s a safe bet that we’re not going to see pure ray tracing for real time rendering any time soon, but what Imagination presents with their hybrid model is something different from the kinds of ray tracing solutions presented before.

From a development point of view, Imagination already has two things in their favor. The first is that they’ve lined up Unity Technologies, developers of the Unity game engine, to support ray tracing. Unity 5.x will be able to use Imagination’s ray tracing technology to accelerate real time lighting previews in editor mode, thereby allowing developers to see a reasonable facsimile of their final (baked) lighting in real time. Meanwhile some game developers already implement ray tracing-like techniques for some of their effects, so some portion of developers have a general familiarity with ray tracing already.


Unity 5 Editor Mode with Ray Tracing

From the consumer standpoint, there is a basic question of whether OEMs and end users will be interested in the technology. With Imagination being the first and only GPU vendor implementing ray tracing (so far), they need to be able to convince OEMs and end users that this technology is worthwhile, a task that’s far from easy in the hyper-competitive SoC GPU space. In the end Imagination needs to sell their customers on the idea of ray tracing, and on the idea that it’s worth the die space that would otherwise be allocated to additional rasterization hardware. And then they need to get developers to create the software to run on this hardware.

Ultimately, by including dedicated ray tracing hardware in their GPUs, Imagination is taking a sizable risk. If their ideas fully pan out then the reward is a first mover advantage, which would cement their place in the SoC GPU space and give them significant leverage as other GPU vendors seek to implement similar ray tracing hardware. The converse is that Imagination’s ray tracing efforts fizzle; while they would still have their equally capable rasterization-only Rogue designs to rely on, at a minimum they would have wasted their time on a technology that went nowhere.

With that in mind, all we can say is that it will be extremely interesting to see where this goes. Though ray tracing is not new in the world of GPUs, the inclusion of dedicated ray tracing hardware is absolutely groundbreaking. We can’t even begin to predict what this might lead to, but with SoC GPUs usually being an echo of desktop GPUs, there’s no denying that we’re seeing something new and unprecedented with Wizard. And just maybe, something that’s the start of a bigger shift in real time graphics.

Finally, today’s announcement brings to mind an interesting statement made by John Carmack a bit over a year ago when he was discussing ray tracing hardware. At the time he wrote “I am 90% sure that the eventual path to integration of ray tracing hardware into consumer devices will be as minor tweaks to the existing GPU microarchitectures.” As is often the case with John Carmack, it looks like he may just be right after all.

Update: Speaking of John Carmack, here's another interesting quip from him, written this afternoon after seeing the Wizard announcement.

Source: Imagination Technologies

45 Comments


  • TETRONG - Tuesday, March 18, 2014 - link

    I hinted at this about a month ago on VR-Zone

    "Apple is invested in Imagination because of graphics..not CPU. Ray-tracing is set to become important in the near future and Imagination has something of a lock on this."

    http://vr-zone.com/articles/arms-ian-ferguson-inte...
  • twtech - Tuesday, March 18, 2014 - link

    The biggest boost for a technology like this would be to get something like a console design win.

    If using their hardware, you can make a game using raytracing effects that renders at 30+ FPS at normal game resolutions, then I think certainly there are developers that would be interested in what they could do with that.

    The problem is that very few developers want to write a bunch of custom rendering code and build special art for the sake of a handful of players that have some custom hardware - unless maybe you're paying them to do it.
  • DanNeely - Tuesday, March 18, 2014 - link

    Unless flagging sales force Nintendo to refresh the Wii U there aren't likely to be any major consoles for Imagination to win anytime soon. Low end android stick platforms don't count at present because none of them are operating in sufficient volume to drive technology as opposed to being a target for ports of existing titles and low budget indie games.
  • Kevin G - Tuesday, March 18, 2014 - link

    Well PowerVR did make it into the Dreamcast back in the day....
  • vFunct - Wednesday, March 19, 2014 - link

    Maybe a new console from Samsung/Google/Apple/Amazon?
  • BMNify - Tuesday, March 18, 2014 - link

    without a real time fractal ray tracing block and code to use that for things like fractal image sharpening, perhaps while turning that bitmap into a fractal 3D map, then it seems a little underwhelming in 2014. are they even going to include a dedicated fast and low power wideIO FIFO memory block in there any time soon?
  • ryszu - Tuesday, March 18, 2014 - link

    WideIO and other high bandwidth memory technologies aren't something we implement. That's really something for our customers to take care of (with our help of course).
  • BMNify - Wednesday, March 19, 2014 - link

    ryszu, it may be a false assumption on my part, but i would have thought you at least did some initial trials with your test chips with both wideIO and 2/4 stacked HMC to see where you can optimize the designs for the coming UHD-1 and especially the later UHD-2 7680x4320 10bit/12bit screens.

    surely someone needs to finally take that step and actually use WideIO on their IP SoC before samsung twist it and make "widcon" the official bad name ;) it was supposed to be out in retail last year and we all know that will make your kit fly a lot faster longer term at lower power usage, so can't imagination use some and STRONGLY advise the next customer to use it as a showpiece, please ask the board to authorise that asap, AS PEOPLE WILL BUY IT.

    regarding the fractal block and generic fractal image sharpening code, you would be wise to just download a current FFMPEG and add any such fractal ray tracing code as a plugin filter for that to get maximum coverage, just ask them on IRC to review any such code you might release to get it included easily oc....
  • ryszu - Wednesday, March 19, 2014 - link

    Our emulation platform is fully flexible in that respect. We can connect the GPU to something that represents a next-gen interconnect for testing, to make sure it works well if the customer decides to implement it.

    I'll take a look at the fractal image sharpening, thanks for the pointer.
  • Kevin G - Tuesday, March 18, 2014 - link

    As neat as this announcement is in the mobile sector, I'm eager to see what this could do on the desktop. Far more power available to scale this architecture and genuinely reach movie level quality in real time at 1080p.
