Final Words

At a high level, the Titan supercomputer delivers an order of magnitude increase in performance over the outgoing Jaguar system at roughly the same power budget. Using over 200,000 AMD Opteron cores, Jaguar could deliver roughly 2.3 petaflops of performance at around 7MW of power consumption. Titan approaches 300,000 AMD Opteron cores but adds nearly 19,000 NVIDIA Tesla K20X GPUs, delivering over 20 petaflops of performance at "only" 9MW. The question remains: how can it be done again?
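Using the round numbers above, the efficiency gain works out to roughly 7x: Jaguar's ~2.3 petaflops at ~7MW is about 0.33 gigaflops per watt, while Titan's 20+ petaflops at ~9MW is north of 2.2 gigaflops per watt.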

In four years Titan will be obsolete and another set of upgrades will have to happen to increase performance in the same power envelope. By 2016, ORNL hopes to be able to build a supercomputer capable of 10x the performance of Titan at similar power consumption. The trick is that the one-time efficiency gain from first adopting GPUs for compute can't be repeated. ORNL will have to rely on process node shrinks and improvements in architectural efficiency, on both the CPU and GPU fronts, to deliver the next 10x increase in performance. Over the next few years we'll see tighter integration between the CPU and GPU, including an on-die communication fabric. That march towards integration will help improve usable performance in supercomputers just as it will in client machines.

Increasing performance by 10x in four years doesn't seem so far-fetched, but breaking the 1 exaflop barrier by 2020 - 2022 will require something much more exotic. One possibility is to move from big, beefy x86 CPU cores to billions of simpler cores. Given ORNL's close relationship with NVIDIA, it's likely that the smartphone-core approach is being advocated internally. Everyone involved has a different definition of what constitutes a simple core (by 2020 Haswell will look pretty darn simple), but it's clear that whatever comes after Titan's replacement won't just look like a bigger, faster Titan. There will have to be more fundamental shifts in order to increase performance by two orders of magnitude over the next decade. Luckily there are many research projects that have yet to come to fruition; die stacking and silicon photonics both come to mind, even though we'll need more than just those.

It's incredible to think that the most recent increase in supercomputer performance has its roots in PC gaming. These multi-billion-transistor GPUs first came about to improve performance and visual fidelity in 3D games; consumer GPUs were built to better simulate reality so we could have more realistic games. It's not too surprising, then, that the same demand exists in the research space, albeit in pursuit of a different goal: creating realistic models of the world and universe around us. It's honestly one of the best uses of compute I've ever seen.

Comments

  • piroroadkill - Wednesday, October 31, 2012

    That sounds like a downgrade, no matter how you slice it.
  • extide - Wednesday, October 31, 2012

    x2, I was thinking the same, especially at only 2.2GHz! I bet they are ~flat on CPU power and all the gain is from the GPUs.
  • SunLord - Friday, November 2, 2012

    HPC is all highly multi-threaded by its very nature, which just happens to be about the only thing Bulldozer is somewhat good at.
  • Jorange - Wednesday, October 31, 2012

    I wonder how many petaflops this beast would have achieved if it used Sandy Bridge EP class chips? AnandTech's review of the Opteron 6276 vs Sandy Bridge Xeon EP showed that Intel was far more performant.
  • SunLord - Friday, November 2, 2012

    I doubt it will make enough of a difference to be worth it, given the main focus is all on the CUDA GPU compute side.
  • CeriseCogburn - Saturday, November 10, 2012

    The AMD crap cores probably cause huge bottlenecks and lag the entire system and wind up as a large loss overall as they waste computer time.
  • Jorange - Wednesday, October 31, 2012

    In a world in which millions of morons are enthralled by Honey Boo Boo and her band of genetic regressionists, it is great that scientists are advancing our understanding of the Universe. Without those 1%, one can only imagine the state our planet would be in.
  • IanCutress - Wednesday, October 31, 2012

    I ported some Brownian motion code from CPU to GPU for my thesis and got a considerable increase (4000x over previously published data). The best thing was that the code scaled with GPUs. Having access to 20k GPUs with 2688 CUDA cores each would just be gravy, especially when simulating 10^12 independent particles and beyond.
  • maximumGPU - Wednesday, October 31, 2012

    4000x?! I don't think I've ever seen such a speedup; was that simply from 1 CPU to 1 GPU?
    I ported a Monte Carlo risk simulation (which also uses Brownian motion, although I suspect for different purposes than yours) and saw about a 300-400x speedup. I thought that was at the top end of what you can get in terms of speed increases.
  • IanCutress - Thursday, November 1, 2012

    It helped that the previously published data was a few generations back, so I had some Moore's Law advantage. The type of simulation for that research was essentially dropped there and then because it was so slow, and no-one had ever bothered to do it on newer hardware. I think a 2.2 GHz Nehalem single-core simulation of my code compared to a GTX 480 version was a 350x jump or so. Make that 16 cores vs 1 GPU (for a DP system) and it's more like a 23x jump.
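
The speedups described in the comments above come from the fact that these simulations are embarrassingly parallel: every particle (or Monte Carlo path) evolves independently, so the work maps naturally onto one GPU thread per particle. The following is a minimal, hypothetical CUDA sketch of that mapping; the kernel name, parameters, and the simple 1D free-diffusion model are illustrative assumptions, not the actual thesis or risk-simulation code the commenters describe.

#include <cstdio>
#include <cuda_runtime.h>
#include <curand_kernel.h>

// One thread per independent Brownian particle.
__global__ void brownian_walk(float *pos, int n_particles, int n_steps,
                              float D, float dt, unsigned long long seed)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_particles) return;

    // Each particle gets its own RNG stream so trajectories stay independent.
    curandState state;
    curand_init(seed, i, 0, &state);

    float x = pos[i];
    float step = sqrtf(2.0f * D * dt);       // standard deviation of each displacement
    for (int s = 0; s < n_steps; ++s)
        x += step * curand_normal(&state);   // Gaussian step
    pos[i] = x;
}

int main()
{
    const int n = 1 << 20;   // ~1M particles; real runs would use far more
    float *d_pos;
    cudaMalloc(&d_pos, n * sizeof(float));
    cudaMemset(d_pos, 0, n * sizeof(float));   // all particles start at x = 0

    brownian_walk<<<(n + 255) / 256, 256>>>(d_pos, n, 10000, 1.0f, 1e-3f, 42ULL);
    cudaDeviceSynchronize();

    float x0;
    cudaMemcpy(&x0, d_pos, sizeof(float), cudaMemcpyDeviceToHost);
    printf("particle 0 final position: %f\n", x0);
    cudaFree(d_pos);
    return 0;
}

Because the particles never interact, the only scaling limits are memory and RNG state, which is why this kind of workload scales across thousands of GPUs about as readily as it scales across the 2688 CUDA cores within one.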
