
  • Jorange - Monday, November 12, 2012 - link

I need 17 petaflops in my PC. 2025, I'm waiting!
  • tviceman - Monday, November 12, 2012 - link

Does anyone know what BlueGene/Q's power draw is when running the same LINPACK benchmark? Just curious to see what, if any, perf/watt improvement Titan makes over BlueGene/Q.
  • Jorange - Monday, November 12, 2012 - link

    7890.00 kW
  • Gnarr - Monday, November 12, 2012 - link

BlueGene/Q: 16325 TFlops / 7890 kW = 2.069 kW
Titan: 17590 TFlops / 8209 kW = 2.143 kW
  • Gnarr - Monday, November 12, 2012 - link

That was supposed to be:
BlueGene/Q: 16325 TFlops / 7890 kW = 2.069 TFlops/kW
Titan: 17590 TFlops / 8209 kW = 2.143 TFlops/kW
  • tviceman - Monday, November 12, 2012 - link

Right, thanks. So Titan was barely able to edge out BlueGene/Q. IBM seems like more of a competitor to NVIDIA in HPC than Intel or AMD!
  • Ktracho - Monday, November 12, 2012 - link

That has been the case for the last several years, though the evidence is not raw performance but rather the number of systems IBM has in the top 10. It will be interesting to see if IBM can keep up. I understand they abandoned a contract they had with the Univ. of IL, to Cray's benefit. Also, I wonder how easy it is to program IBM's BlueGene/Q. Anyone know?
  • Death666Angel - Monday, November 12, 2012 - link

It's still "Flops" though, the s is part of the acronym, otherwise the unit doesn't make any sense. :D
  • Jorange - Monday, November 12, 2012 - link

According to the TOP500 list, BlueGene/Q has a theoretical peak of 20 petaflops but achieves 16 PFlops in Linpack. Titan has a peak of 27 PFlops yet 'only' achieves 17 PFlops in Linpack. It seems much harder to fully utilize CPU/GPU systems; maybe Intel is right about Xeon Phi.
  • Khato - Monday, November 12, 2012 - link

Not necessarily - the systems with Xeon Phi are only averaging a similar efficiency of around 65% of theoretical peak. The 'advantage' of Xeon Phi has always been comparative ease of programming, so we'll have to see if that actually plays out. The fact that there are already 6 entries in the TOP500 list using it, with Stampede at the #7 spot despite being only ~1/3 populated, is certainly a good start for a new product line.
  • Ktracho - Monday, November 12, 2012 - link

My wild guess is that it is popular because Intel is providing grants to institutions willing to host these machines. Note, however, that they are not nearly as power efficient as IBM's or Cray's. I would guess they will argue that power efficiency will improve with each generation of their accelerators.
  • cjl - Wednesday, November 14, 2012 - link

The latest Green500 list (power efficiency rankings of supercomputers) would disagree with you there - the top system is an Intel Xeon Phi, at 2500 MFLOPS/W.
  • TheJian - Wednesday, November 14, 2012 - link

Where do you get that it's easy to code for (other than Intel engineers saying it)? Larrabee failed on the desktop and never came out (if memory serves) partially because it was thought to be complicated to keep fully loaded, and coders would rather stick with the older, easier techniques they already knew than optimize for it. It would have been highly programmable, but I never thought that meant easy.

    ""Larrabee silicon and software development are behind where we hoped to be at this point in the project," stated Intel in a email to DailyTech."

    "Intel recognized the importance of software and drivers to Larrabee's success, leading to the creating of the Intel Visual Computing Institute at Saarland University in Saarbrücken, Germany. The lab conducts basic and applied research in visual and parallel computing."

If it was easy to code for, I'd think they wouldn't have been behind with their own software efforts. Intel has huge resources compared to NVIDIA/AMD, but was beaten by both to 2 TFlops, then 5 TFlops, etc.

I agree with Ktracho... I think they're seeding these, not people investing in them (yet? could change). If Intel offers some free chips, you use them unless you're stupid :) On the other hand, I think Cray pays for NVIDIA's ;)
  • eddman - Monday, November 12, 2012 - link

    That's not it.

    Jaguar XT5-HE, no GPU: 75.4%
    Jaguar XK6, Tesla 2090: 73.8%
    Titan: 64.8%
    Stampede, Xeon Phi: 67.1%

    Titan and Stampede rate about the same in efficiency.

    The interesting thing here is that Jaguar XK6 was almost as efficient as the CPU only XT5.

Why did Titan drop 9%? Could it be that the Gemini interconnect isn't able to cope with such a load? Is it unoptimized software?

    If so, does it mean that Titan can achieve a higher rate later on?
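For reference, the percentages in these comments are just achieved Linpack throughput divided by theoretical peak. A minimal sketch using Titan's figures as quoted in this thread (the 27.1 PFlops peak is approximate):

```python
# Linpack efficiency = achieved throughput (Rmax) / theoretical peak (Rpeak).
rmax_pflops = 17.59   # Titan's achieved Linpack score, PFlops
rpeak_pflops = 27.1   # Titan's theoretical peak, PFlops (approximate)

efficiency = rmax_pflops / rpeak_pflops
print(f"Titan Linpack efficiency: {efficiency:.1%}")  # ~64.9% of peak
```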
  • MrSpadge - Monday, November 12, 2012 - link

I wouldn't rule out that they'll be able to squeeze a few more PFlops out of it via software tuning. We don't have much experience yet with such systems ;)
  • biostud - Tuesday, November 13, 2012 - link

I wonder how efficient their PSUs are?
  • Arbie - Thursday, November 15, 2012 - link

Actually, what I really wonder is why we are paying for this project. What is ORNL going to do with it other than set records and boost the chipmakers' bottom lines? What simulations are we relying on that just have to run ten times faster than before? The only obvious one is weather prediction, where real time could matter a lot. So I would expect the weather service to own the computer, not ORNL.

    At least it appears to be a success as a project, which is all too rare in government agencies.
  • cjl - Thursday, November 15, 2012 - link

Lots of scientific research can benefit from as much computing power as you can throw at it. You already mentioned one of the big ones: weather. In addition, aerodynamics research can use a huge amount of CPU power to run CFD, which will help future aircraft fly more efficiently and possibly faster. Coupled multiphysics simulations (aerodynamic, thermal, and structural simulations rolled into one) also require enormous computational power, and they allow for improved design of high speed aircraft, spacecraft, and reentry vehicles for the space program. Nuclear simulations can help model nuclear detonations, including how aging has affected the nuclear arsenal (though that is probably going to happen more on LLNL's Sequoia, a comparably powerful machine), and can also simulate nuclear reactors, which could be a substantial part of our move away from fossil fuels.

More in the realm of basic, rather than applied, research, astronomical simulations also require a huge amount of computing power. Everything from galactic and large-scale structure formation in the early universe to stellar evolution, including simulations of the last moments of a massive star going supernova, requires far more computational power than even the most powerful computers today can provide.

    They can even be used for simulations of things we take for granted - one good example of this is combustion simulation in internal combustion engines. Yes, we have perfectly functional engines already, but to properly simulate the combustion takes an enormous amount of computing power, and once it can be properly simulated, it can be improved and optimized, helping reduce fuel usage and emissions from new cars.
  • Arbie - Friday, November 16, 2012 - link

Thanks for the thorough reply. Some of your examples are areas that I never thought of.
  • hmoobphajej - Friday, November 16, 2012 - link

Anand covered the capabilities of ORNL's supercomputer in great detail. You should really read it and watch the videos he posted, because they are pretty interesting. Even as powerful as this supercomputer is, it isn't powerful enough for many things scientists still want to do.
