Final Words

At a high level, the Titan supercomputer delivers an order of magnitude increase in performance over the outgoing Jaguar system at roughly the same power budget. Using over 200,000 AMD Opteron cores, Jaguar could deliver roughly 2.3 petaflops at around 7 MW of power consumption. Titan pairs nearly 300,000 AMD Opteron cores with almost 19,000 NVIDIA K20X GPUs, delivering over 20 petaflops at "only" 9 MW. The question remains: how can it be done again?
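
To put rough numbers on that efficiency gain, here's a back-of-the-envelope sketch using only the approximate figures quoted above (rounded values, not official ORNL specifications):

```python
# Back-of-the-envelope performance-per-watt comparison using the
# approximate figures quoted above (rounded, not official specs).
jaguar_pflops, jaguar_mw = 2.3, 7.0   # ~2.3 petaflops at ~7 MW
titan_pflops, titan_mw = 20.0, 9.0    # >20 petaflops at ~9 MW

# Convert petaflops -> gigaflops (x1e6) and megawatts -> watts (x1e6).
jaguar_gflops_per_watt = (jaguar_pflops * 1e6) / (jaguar_mw * 1e6)
titan_gflops_per_watt = (titan_pflops * 1e6) / (titan_mw * 1e6)

print(f"Jaguar: {jaguar_gflops_per_watt:.2f} GFLOPS/W")   # ~0.33
print(f"Titan:  {titan_gflops_per_watt:.2f} GFLOPS/W")    # ~2.22
print(f"Gain:   {titan_gflops_per_watt / jaguar_gflops_per_watt:.1f}x")  # ~6.8x
```

In other words, raw performance grows by roughly 9x, but performance per watt improves by roughly 7x, and the latter is the number that actually has to keep scaling.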

In four years Titan will be obsolete, and another set of upgrades will be needed to increase performance within the same power envelope. By 2016 ORNL hopes to build a supercomputer capable of 10x the performance of Titan within a similar power envelope. The trick is that the one-time efficiency gain of moving to GPUs for compute can't be had a second time. ORNL will have to rely on process node shrinks and improvements in architectural efficiency, on both the CPU and GPU fronts, to deliver the next 10x performance increase. Over the next few years we'll see more integration between the CPU and GPU, with an on-die communication fabric. The march towards integration will help improve usable performance in supercomputers just as it will in client machines.
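
Holding the power envelope roughly constant means the 10x performance goal is really a 10x performance-per-watt goal. A quick sketch of what that implies, again assuming the rounded ~20 PF / ~9 MW figures above:

```python
# Efficiency target implied by "10x Titan in a similar power envelope".
# Assumes the rounded ~20 petaflops / ~9 MW figures from this article.
titan_gflops_per_watt = (20.0 * 1e6) / (9.0 * 1e6)     # ~2.2 GFLOPS/W today
target_gflops_per_watt = 10 * titan_gflops_per_watt    # ~22 GFLOPS/W needed

print(f"Titan today:  {titan_gflops_per_watt:.1f} GFLOPS/W")
print(f"2016 target: ~{target_gflops_per_watt:.0f} GFLOPS/W")
```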

Increasing performance by 10x in four years doesn't seem so far-fetched, but breaking the 1 exaflop barrier by 2020 to 2022 will require something much more exotic. One possibility is to move from big, beefy x86 CPU cores to billions of simpler cores. Given ORNL's close relationship with NVIDIA, it's likely that the smartphone-core approach is being advocated internally. Everyone involved has a different definition of what constitutes a simple core (by 2020, Haswell will look pretty darn simple), but it's clear that whatever comes after Titan's replacement won't just be a bigger, faster Titan. There will have to be more fundamental shifts in order to increase performance by two orders of magnitude over the next decade. Luckily there are many research projects that have yet to come to fruition; die stacking and silicon photonics both come to mind, even though we'll need more than just those.
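
For a sense of the pace that timeline demands, here's the compound annual performance growth implied by going from Titan's roughly 20 petaflops in 2012 to 1 exaflop (1,000 petaflops) by 2020 or 2022. This is a simple extrapolation from the figures above, not a roadmap:

```python
# Compound annual growth rate implied by the exascale timeline:
# ~20 petaflops in 2012 -> 1,000 petaflops (1 exaflop) by 2020 or 2022.
start_pflops, start_year = 20.0, 2012
goal_pflops = 1000.0  # 1 exaflop

for goal_year in (2020, 2022):
    years = goal_year - start_year
    cagr = (goal_pflops / start_pflops) ** (1.0 / years) - 1.0
    print(f"1 EF by {goal_year}: ~{cagr:.0%} per year, sustained for {years} years")
```

That works out to roughly 48 to 63 percent improvement every single year, sustained for the better part of a decade, which is why something more fundamental than process shrinks will be needed.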

It's incredible to think that the most recent increase in supercomputer performance has its roots in PC gaming. These multi-billion-transistor GPUs first came about to improve performance and visual fidelity in 3D games; they were built to better simulate reality so we could have more realistic games. It's not too surprising, then, that the same demands apply in the research space, albeit in pursuit of a different goal: creating realistic models of the world and universe around us. It's honestly one of the best uses of compute I've ever seen.

130 Comments

  • karasaj - Wednesday, October 31, 2012

    We should see what kinds of frames we get :)

    (Yes, you'd have a single-threaded CPU bottleneck, but I can dream)
  • N4g4rok - Wednesday, October 31, 2012

    I bet that microstutter's a bastard, though.
  • Alexvrb - Saturday, November 03, 2012

    Put it in AFR mode! :P
  • hansmuff - Sunday, November 04, 2012

    Hmm, with this kind of power, just render ALL POSSIBLE frames a full second ahead, and flip the display to the framebuffer that corresponds to the gameplay :)
  • Rookierookie - Wednesday, October 31, 2012

    Yes, but can it run Crysis?
  • SilthDraeth - Wednesday, October 31, 2012

    Wrong question. The correct question is:

    Will it blend?
  • losttsol - Wednesday, October 31, 2012

    Yes it can, as long as Crysis isn't running on top of Windows Vista.
  • inighthawki - Wednesday, October 31, 2012

    What does Vista have to do with this?
  • RussianSensation - Wednesday, October 31, 2012

    Over 9000 fps!

    Good to see GPUs gaining traction outside of video games, paving the way for their use as general-purpose devices that can benefit a wide variety of usage patterns outside of games :) Hopefully the profits from these will mean even better GPUs for us gamers down the line.
  • CeriseCogburn - Saturday, November 10, 2012

    You mean NVIDIA GPUs gaining traction, and far outperforming AMD cores.
