We’re here at NVIDIA’s GPU Technology Conference (GTC) 2012, the company’s semi-annual professional developer conference. There’s been a great deal announced that will take a few days to go over completely, but for now we wanted to start on the product side with NVIDIA’s major product announcements. With the launch of GK104 back in March, NVIDIA is now ready to start rolling out some of their professional products, and while the next generation of Quadro is not yet ready, Tesla is another matter. This brings us to the first part of our GTC coverage: the next generation of Tesla cards, the Tesla K10 and Tesla K20.

                        Tesla K20*    Tesla K10         Tesla M2090
Stream Processors       <=2880        2 x 1536          512
Texture Units           <=240         2 x 128           64
ROPs                    <=48          2 x 32            48
Core Clock              ?             745MHz            650MHz
Shader Clock            N/A           N/A               1300MHz
Memory Clock            ?             5GHz GDDR5        3.7GHz GDDR5
Memory Bus Width        384-bit       2 x 256-bit       384-bit
L2 Cache                <=1.5MB       2 x 512KB         768KB
VRAM                    ?             2 x 4GB           6GB
ECC                     Full          Partial (DRAM)    Full
FP64                    1/3 FP32      1/24 FP32         1/2 FP32
TDP                     ?             225W              225W
Transistor Count        7.1B          2 x 3.5B          3B
Manufacturing Process   TSMC 28nm     TSMC 28nm         TSMC 40nm
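
The FP64 ratios in the table translate directly into peak throughput figures. As a rough illustration (our arithmetic, not NVIDIA's, assuming the usual one fused multiply-add, i.e. 2 FLOPs, per core per cycle):

```python
def peak_tflops(cores, clock_mhz, flops_per_cycle=2):
    """Peak throughput assuming one FMA (2 FLOPs) per core per cycle."""
    return cores * flops_per_cycle * clock_mhz * 1e6 / 1e12

# Tesla M2090: 512 cores at the 1300MHz shader clock, FP64 at 1/2 FP32
m2090_fp32 = peak_tflops(512, 1300)    # roughly 1.33 TFLOPs
m2090_fp64 = m2090_fp32 / 2            # roughly 0.67 TFLOPs

# Tesla K10: 2 x 1536 cores at 745MHz, FP64 at just 1/24 FP32
k10_fp32 = peak_tflops(2 * 1536, 745)  # roughly 4.58 TFLOPs
k10_fp64 = k10_fp32 / 24               # roughly 0.19 TFLOPs
```

This is why, despite more than tripling M2090's single-precision throughput, K10 actually regresses on double precision.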

The first of the new Teslas, and the only model slated to be available in the near future, is the Tesla K10. In an interesting turn of events, Tesla K10 will be based on GK104. Specifically, it’s a dual-GPU card based on NVIDIA’s recently launched GTX 690, modified to fit the needs of the GPU compute market. Previous generation Tesla cards have always been based on NVIDIA’s top-tier GPUs – GT200 and GF100/GF110 – so this is the first time NVIDIA has ever split the Tesla market in this way by using a lower-tier GPU.

The fact of the matter is that with GK104 first launching in GeForce products, NVIDIA downplayed GK104’s compute capabilities. And our own benchmarking has established that GTX 680’s compute performance ranges anywhere from slightly ahead of the Fermi-based GTX 580 to well behind it. Being a descendant of GF114, GK104 had a fair bit of its compute capability stripped out relative to GF110, not the least of which are double-precision floating point performance, ECC cache protection, and about half the number of registers per CUDA core relative to Fermi.

Given the questionable compute performance of GK104, NVIDIA’s decision to launch a Tesla part based on it is quite unexpected. Still, this is not to say that GK104 can’t perform well in the right situations, and this is exactly what NVIDIA is designing K10 around. The fact that we’ve found GK104 cards to be slow at compute workloads at times is not lost on NVIDIA; they know better than anyone else what GK104 really can and can’t do, and have planned accordingly. For that reason NVIDIA is breaking from what little tradition there is of Tesla as a broad-market product and pitching K10 at a very specific market.

NVIDIA’s market strategy here is actually summed up rather well in their K10 press release: “NVIDIA Tesla K10 GPU Accelerates Search for Oil and Gas Reserves, Signal and Image Processing for Defense Industry.” GK104 lacks the ECC and compute flexibility of the Fermi Tesla cards, but what it doesn’t lack is single-precision compute performance and memory bandwidth, and with a dual-GPU card in particular it has both of those in spades. Accordingly, NVIDIA’s goal for K10 is to go after the specific market segments that don’t need ECC and don’t need flexibility, but do need all the raw compute performance they can get. This, as it turns out, is something gamers are already familiar with: image processing. Image processing doesn’t need the incredible levels of precision that pure computational work does, and for that matter it’s rather tolerant of the occasional error, so NVIDIA believes there’s a suitably large market there that can be served by GK104 rather than GK110.

With that said, I must admit that if GK110 had come first, I don’t know if we’d be having this conversation. Even if a dual-GK104 card is faster, splitting their market like this is not an easy move to make. But with GK110 not due at retail for another 5-6 months, it’s obviously NVIDIA’s only choice if they want to get new Tesla cards onto the market before the end of the year.

In any case, we’ll know more about the full performance of K10 soon enough. Based on GTX 680 I think we already have a good idea of GK104’s basic strengths and weaknesses, but I also have to consider the possibility that NVIDIA has been sandbagging the GTX 600 series’ compute performance. NVIDIA has handicapped GeForce performance in a few different ways for quite a number of years in order to create distinct market segments, first for Quadro and more recently for Tesla. With GTX 580 this was done by handicapping both double-precision and geometry performance, but because GK104 is inherently weak at double precision, NVIDIA would need to handicap the GTX 600 series in some other manner if they wanted to maintain this kind of market segmentation. So perhaps GK104 is actually faster at compute than what we’ve seen so far?

Wrapping things up, while NVIDIA hasn’t posted every last spec for K10, they have posted enough for us to work with. Like GTX 690, K10 is using fully enabled GK104 GPUs, so based on NVIDIA’s theoretical performance figures of 4.58 TFLOPs with 320GB/sec of bandwidth, it’s almost certainly clocked at around 745MHz core and 5GHz memory. Meanwhile, for memory the card has 8GB of GDDR5, which breaks down to 16 2Gb GDDR5 modules per GPU for a total of 32 on the card. TDP is said to be identical to M2090, which would make it a 225W part. Finally, as far as availability and pricing are concerned, officially K10 is available “now,” though in practice partners won’t be shipping cards and systems until closer to the end of the month. Pricing is expected to be close to that of the M2090 it replaces, which would mean we’re looking at $2500 and higher.
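
As a quick sanity check on those deductions, here’s a minimal sketch (our arithmetic, working backwards from NVIDIA’s posted figures) of how the clocks and memory configuration fall out:

```python
# Back out Tesla K10's clocks from NVIDIA's posted figures:
# 4.58 TFLOPs FP32 and 320GB/sec across two fully enabled GK104 GPUs.

CORES = 2 * 1536        # 2 GPUs x 1536 CUDA cores each
FLOPS_PER_CYCLE = 2     # one fused multiply-add per core per cycle

core_clock_mhz = 4.58e12 / (CORES * FLOPS_PER_CYCLE) / 1e6  # about 745MHz

BUS_BYTES = 2 * (256 // 8)          # 2 GPUs x 256-bit bus, in bytes/transfer
mem_clock_ghz = 320 / BUS_BYTES     # 5.0GHz effective data rate

# Capacity: 16 modules per GPU x 2Gb (0.25GB) each = 4GB per GPU, 8GB total
total_vram_gb = 2 * 16 * 2 / 8
```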

Tesla K20 - The First GK110 Product

  • Ananke - Thursday, May 17, 2012 - link

    I would bet 1/1 or 1/2 is better, but we will see the outcome of the battle when Handbrake gets OpenCL acceleration on both NVidia and AMD units.

    I think Radeons are better for heavy computations, but then the sheer user base of CUDA is larger - it is just cheaper to replace Fermi with Kepler accelerators.
    Reply
  • Ryan Smith - Thursday, May 17, 2012 - link

    These are the same FP64 CUDA Cores that we saw on GK104. So they cannot do FP32. They can only be used for FP64 operations. Reply
  • kilkennycat - Thursday, May 17, 2012 - link

    I have been working closely with a professional developer of CUDA applications targeting real-time processing of broadcast-quality HD video signals (frame-rate conversion, re-sizing, slow-motion, de-interlacing, etc.) using multiple GTX5xx cards operating in tandem (no need for SLI). None of the underlying algorithms require any double-precision computation; single-precision is more than adequate.

    After much waiting, last week he finally managed to acquire a GTX680 board and ran benchmarks for that board against a GTX580. He found that the average GTX680 conversion frame-rate on an extensive test-suite dropped to less than 2/3 of that available with a single GTX580, actually only a little better than the frame-rate available from a previously-tested 900MHz overclocked GTX560Ti. Further technical analysis revealed conclusively that the 256-bit memory interface of the GK104 was the performance killer.....
    Reply
  • krslavin - Sunday, May 20, 2012 - link

    I have been doing the exact same thing! I ran tests on my frame-rate converter and the GTX 680 got only 70% of a GTX 580! I also thought it was the reduced cache bandwidth because there are only 8 SMs instead of 16. I also do not use double precision FP, but it seems that specmanship and marketing wins out over common sense ;^( Reply
  • g101 - Saturday, July 14, 2012 - link

    Indeed it does... Reply
  • Ursus.se - Saturday, May 19, 2012 - link

    From what I understand, this unit fits with remarks made to me from inside Apple about a new, truly up-to-date Mac Pro for FCPX users...
    Let's hope...
    Ursus
    Reply
  • g101 - Saturday, July 14, 2012 - link

    You can also shut the fuck up.

    Professionals don't use "FCPX!!!" . Someone needs to grasp you around the neck and wake your obviously dormant brain into a state of at least partial activity.

    Reply
  • garyab33 - Tuesday, May 22, 2012 - link

    I really hope 2 of these cards in SLI will finally be able to run Metro 2033 at 1920x1080 with everything maxed out (DoF & tessellation on) and Crysis 1 (16xAA and 16xAF), also with everything set to highest in the NV control panel, at a stable 100-120 fps. I don't care about the heat or the power consumption of GK110 as long as it produces 60+ fps in the most demanding games, and in SLI above 100fps. That's how games are supposed to be played. GTX 680 is a joke; it is only slightly better than GTX 580 in Metro 2033. It wasn't worth the wait or the price. Reply
  • CeriseCogburn - Wednesday, May 23, 2012 - link

    Like I said.
    Even after 680's are present and sold, people like you say they can't be built.
    I do wonder if you're wearing your shoes after you attempt to tie the shoelaces.
    No looking down before you answer.
    Reply
  • g101 - Saturday, July 14, 2012 - link

    Shut the fuck up dumbass.

    You never have anything worthwhile to say, it's always some unintelligible pro-nvidia rubbish that doesn't even address the previous comments.
    There's no question in our minds, we know nvidia can make a hobbled bit of rubbish and sell it to little dipshits who spend all their time commenting about their favorite computer hardware brand.

    Every. Single. Article.
    Reply
