Final Words

At a high level, the Titan supercomputer delivers an order of magnitude increase in performance over the outgoing Jaguar system within roughly the same power budget. With over 200,000 AMD Opteron cores, Jaguar could deliver roughly 2.3 petaflops of performance at around 7MW of power consumption. Titan pushes the Opteron core count to nearly 300,000 and adds almost 19,000 NVIDIA K20X GPUs, delivering over 20 petaflops of performance at "only" 9MW. The question remains: how can it be done again?
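To put those round numbers in perspective, here's a quick back-of-the-envelope comparison of performance per watt (a rough sketch using the approximate figures quoted above, not official measurements):

```python
# Rough performance-per-watt comparison using the round figures above.
# 1 petaflop per megawatt = 1e15 / 1e6 = 1e9 FLOPS/W = 1 GFLOPS/W.

jaguar_pflops, jaguar_mw = 2.3, 7.0
titan_pflops, titan_mw = 20.0, 9.0

jaguar_eff = jaguar_pflops / jaguar_mw   # ~0.33 GFLOPS/W
titan_eff = titan_pflops / titan_mw      # ~2.22 GFLOPS/W

print(f"Jaguar: {jaguar_eff:.2f} GFLOPS/W")
print(f"Titan:  {titan_eff:.2f} GFLOPS/W")
print(f"Improvement: {titan_eff / jaguar_eff:.1f}x")  # ~6.8x perf/W
```

In other words, nearly all of Titan's 10x performance gain comes from spending each watt more efficiently, not from drawing more power.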

In four years Titan will be obsolete, and another round of upgrades will be needed to keep increasing performance within the same power envelope. By 2016 ORNL hopes to build a supercomputer capable of 10x the performance of Titan at similar power consumption. The trick is, the one-time efficiency jump that comes from first adopting GPUs for compute can't be claimed twice. ORNL will have to rely on process node shrinks and improvements in architectural efficiency, on both the CPU and GPU fronts, to deliver the next 10x performance increase. Over the next few years we'll see more integration between the CPU and GPU, with an on-die communication fabric. The march towards integration will help improve usable performance in supercomputers just as it will in client machines.
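Continuing the same back-of-the-envelope math shows how steep that 2016 target is (again just a sketch, assuming roughly 10x Titan's performance in a similar ~9MW envelope):

```python
# What a 10x-Titan machine in roughly the same power envelope implies.
target_pflops = 20.0 * 10   # ~200 PFLOPS, assuming 10x Titan
target_mw = 9.0             # assuming a similar power budget

required_eff = target_pflops / target_mw  # ~22 GFLOPS/W
print(f"Required efficiency: {required_eff:.0f} GFLOPS/W")
# Titan sits at ~2.2 GFLOPS/W, so the successor needs another ~10x in
# performance per watt -- this time from process shrinks and architecture
# rather than a one-time move to GPUs.
```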

Increasing performance by 10x in four years doesn't seem so far-fetched, but breaking the 1 exaflop barrier by 2020 - 2022 will require something much more exotic. One possibility is to move from big, beefy x86 CPU cores to billions of simpler cores. Given ORNL's close relationship with NVIDIA, it's likely that the smartphone-core approach is being advocated internally. Everyone involved has a different definition of what constitutes a simple core (by 2020 Haswell will look pretty darn simple), but it's clear that whatever comes after Titan's replacement won't just look like a bigger, faster Titan. There will have to be more fundamental shifts in order to increase performance by two orders of magnitude over the next decade. Luckily there are many research projects that have yet to come to fruition; die stacking and silicon photonics both come to mind, even though we'll likely need more than just those.
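For scale, here's what the exascale target looks like next to Titan (a sketch; the ~20MW figure is a commonly cited exascale power goal, not an ORNL commitment):

```python
# The exascale gap, in rough numbers.
titan_pflops = 20.0
exa_pflops = 1000.0        # 1 exaflop = 1,000 petaflops
assumed_mw = 20.0          # assumed power budget, not an ORNL figure

print(f"Performance gap vs. Titan: {exa_pflops / titan_pflops:.0f}x")  # ~50x
print(f"Required efficiency: {exa_pflops / assumed_mw:.0f} GFLOPS/W")  # ~50 GFLOPS/W
# That's over 20x Titan's ~2.2 GFLOPS/W, which is why simpler cores,
# die stacking, and silicon photonics are all on the table.
```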

It's incredible to think that the most recent increase in supercomputer performance has its roots in PC gaming. These multi-billion transistor GPUs first came about to improve performance and visual fidelity in 3D games; the first consumer GPUs were built to better simulate reality so we could have more realistic games. It's not too surprising, then, that the same demands apply in the research space, although in pursuit of a different goal: to create realistic models of the world and universe around us. It's honestly one of the best uses of compute I've ever seen.

Comments

  • Anand Lal Shimpi - Wednesday, October 31, 2012

    It was an awesome trip, seriously one of the best. Talking to Dr. Messer was one of the highlights for sure, that guy is insanely smart and very passionate about his work.

    Old hardware is traded in when you order the next round of upgrades :)

    Take care,
  • Jaybus - Tuesday, February 19, 2013

    Yes, great work! I suggest seeing about a trip to IBM Research or HRL Labs to investigate the DARPA SyNAPSE project. That could be another really interesting trip and article.
  • Mumrik - Wednesday, October 31, 2012

    I guess we're finally beyond the bad "But will it run Crysis?" jokes.

    This was pretty amazing to watch. The challenges of putting something together at that scale are fascinating and intimidating.
  • dishayu - Wednesday, October 31, 2012

    I'm sad to have never even visited a datacenter. Would love to take a tour like this some day.

    And gaming has finally started paying off in the real world too; that's pretty sweet. :D
  • poohbear - Wednesday, October 31, 2012

    Sure, but can it play Crysis??
  • GTRagnarok - Wednesday, October 31, 2012

    Wouldn't it be so much more power efficient if they were able to use Intel's chips? Maybe they will redesign the whole thing in the future.
  • A5 - Wednesday, October 31, 2012

    It would, but you'd have to take that up with Cray.
  • Reikon - Wednesday, October 31, 2012

    Did anyone else notice on the second picture of the Titan Installation gallery that the guy is using a ridiculous amount of thermal paste for each CPU?
  • IanCutress - Wednesday, October 31, 2012

    In this environment, where stability is key, he was probably taught that having a bit more is safer than having a bit less. No doubt the data center was designed around airflow software to ensure that heating issues do not arise based on an 'average' application of thermal material.
  • maximumGPU - Wednesday, October 31, 2012

    Here's to us gamers for advancing science and making the world a better place.
    You're welcome!

    Awesome article.
