Tesla, CUDA, and the Future

We haven't been super excited about the applicability of CUDA on the desktop. Sure, NVIDIA has made wondrous promises and bold claims, but the applications for end users just aren't there yet (and the ones that do exist are rather limited in scope and applicability). But the same has not been true for CUDA in the workstation and HPC markets.

Tesla, NVIDIA's workstation-level GPU computing version of its graphics cards (it has no display output and is spec'd a bit differently), has been around for a while, but we are seeing more momentum in that area lately. As of yesterday, NVIDIA has announced partnerships with Dell, Lenovo, Penguin Computing, and others to bring out desktop boxes featuring 4-way Tesla action. These 4-Tesla desktop systems, called Tesla Personal Supercomputers, will cost less than $10K US. This is an important number to come in under, says NVIDIA, because it is below the limit for discretionary spending at many major universities. Rather than needing to follow in the footsteps of Harvard, MIT, UIUC, and others that have built their own GPU computing boxes and clusters, universities and businesses can now trust a reliable computing vendor to deliver and support the required hardware.

We don't have any solid specs on the new boxes yet. Different vendors may do things slightly differently, and we aren't sure whether NVIDIA is pushing for a heavily standardized box or giving its partners complete flexibility. But regardless of the rest of the box, the Tesla cards themselves are the same cards that have been available since earlier this year.

These personal supercomputers aren't going to end up in homes anytime soon, as they are squarely targeted at workstation and higher-level computing. But that doesn't mean this development won't have an impact on the end user. By targeting universities through the retail support of its new partners in this effort, NVIDIA is making it much more attractive (and possible) for universities to teach GPU computing and massively parallel programming using its hardware. Getting CUDA into the minds of future developers will go a long way, not just for the HPC market, but for every market touched by these future graduates.
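
For readers who haven't seen any CUDA code, here's a minimal sketch of the kind of program an introductory GPU computing course might open with: a SAXPY kernel (y = a*x + y) in which each element of a million-element array is handled by its own thread. This is our own illustrative example, not anything from NVIDIA's course material, but it captures the massively parallel model these classes would teach.

    // Illustrative CUDA C example (our own sketch, not NVIDIA material).
    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    // Each thread computes one element of y = a*x + y.
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        const int n = 1 << 20;                  // one million elements
        const size_t bytes = n * sizeof(float);

        // Allocate and fill host arrays.
        float *hx = (float *)malloc(bytes);
        float *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        // Allocate device arrays and copy the inputs over.
        float *dx, *dy;
        cudaMalloc((void **)&dx, bytes);
        cudaMalloc((void **)&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // Launch one thread per element, 256 threads per block.
        const int threadsPerBlock = 256;
        const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        saxpy<<<blocks, threadsPerBlock>>>(n, 2.0f, dx, dy);

        // Copy the result back and spot-check it: 2*1 + 2 = 4.
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f (expected 4.0)\n", hy[0]);

        cudaFree(dx); cudaFree(dy);
        free(hx); free(hy);
        return 0;
    }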

It's also much easier for an engineer to sell a PHB on picking up "that new Dell system" rather than a laundry list of expensive components to be built and supported either by IT staff or by the engineer himself. Making inroads into industry (no matter the industry) will start getting parts moving, expose more developers to the CUDA environment, and create demand for more CUDA developers. This will also help gently nudge students and universities towards CUDA, and even if the initial target is HPC research and engineering, increased availability of hardware and programs will attract students who are interested in applying the knowledge to other areas.

It's all about indoctrination, really. Having a good product or a good API does nothing without developers and support. The more people NVIDIA can convince that CUDA is the greatest thing since sliced bread, the closer to the greatest thing since sliced bread CUDA will become (in the lab and on the desktop). Yes, they've still got a long, long way to go, but the announcement of partners to provide Tesla Personal Supercomputer systems is a major development and not something the industry (and especially AMD) should underappreciate.

Comments

  • chizow - Thursday, November 20, 2008 - link

    quote:

    We'll certainly see after we run all the tests, but stay tuned for an update on that area.


    When are we going to see an updated, comprehensive review? You mentioned something about a huge review with the new Core i7 test bed over a month ago and the NDA on i7 was lifted over two weeks ago. Still no update or comprehensive GPU review.

    You guys have been making a lot of half-assertions and assumptions promising follow-ups but have consistently failed to follow through on them.
  • CPUGuy - Thursday, November 20, 2008 - link

    I find it odd that someone can admit that both camps have their driver problems yet be so asphyxiated on camp's problems more so then the other. When both camps are examined in a petri dish under a microscope, it becomes apparent that both camps have their share of problems that affect everyone, not just the consumers from one camp vs. another.

    In all, that's why we love to read articles & reviews that are fair and equitable, which seek the truth in an unbiased fashion and provide it not from just one side but from "both sides" of the coin. Not to be vague:
    -if a driver improves performance, provide picks to show IQ
    -if one card is faster than another, let it do so from its original standard clock rate
    -if one card is faster than another overclocked, let both opposing cards show the same percentage of overclock
    -etc

  • CPUGuy - Thursday, November 20, 2008 - link

    ...so asphyxiated on one camp's problems more so then the other...
    -if a driver improves performance, provide photos to show IQ
  • GaryJohnson - Saturday, November 22, 2008 - link

    ...asphyxiated ...

    I think the word you were looking for is fixated.
  • poohbear - Thursday, November 20, 2008 - link

    You guys looked at Crysis and Oblivion and other games that weren't even mentioned by NVIDIA in the driver download notes. The games NVIDIA mentions are:

    Up to 10% performance increase in 3DMark Vantage (performance preset)
    Up to 13% performance increase in Assassin's Creed
    Up to 13% performance increase in BioShock
    Up to 15% performance increase in Company of Heroes: Opposing Fronts
    Up to 10% performance increase in Crysis Warhead
    Up to 25% performance increase in Devil May Cry 4
    Up to 38% performance increase in Far Cry 2
    Up to 18% performance increase in Race Driver: GRID
    Up to 80% performance increase in Lost Planet: Colonies
    Up to 18% performance increase in World in Conflict

    I personally noticed a smoother framerate in World in Conflict & Crysis Warhead on my 8800 GT. Company of Heroes: Opposing Fronts didn't seem to play any different, but the others were definitely smoother. Just my 2 cents.
  • Spacecomber - Thursday, November 20, 2008 - link

    Why download and install a new video driver? If the games you are playing are supported, as well as any extra features that you need, I don't see any advantage to jumping on board with every new driver release.

    I usually wait until I've picked up a new game that reveals some limitation in the driver or until I've upgraded to a new graphics card. Driver releases primarily seem to focus on better supporting the latest games and providing drivers for the latest generation GPUs. Occasionally, you'll see new features introduced, like better support for running two video cards while using one for PhysX acceleration, as in this driver release. However, that seems to be the exception, rather than the rule, which perhaps justifies Anandtech's write-up on this particular driver release.
  • The0ne - Thursday, November 20, 2008 - link

    Given what you're saying, people should head over to guru3d.com or use the Omega drivers that other users have tested on specific games. I'm running 174/175 on mine right now simply because it doesn't choke on the dual-display output to my TV, doesn't lag it, and is pretty stable with general tasks. I can't say much in terms of games because the only one I play is FFXI, and it really only suffers under Vista.

    However, no driver package is perfect, as there will always be some issue waiting to surface. Just take that to heart when trying different versions.
  • StillPimpin - Thursday, November 20, 2008 - link

    I am very interested to know whether dual-monitor SLI support is/will be enabled in the next round of Quadro drivers, or if this is only related to the GeForce line.
  • pmonti80 - Thursday, November 20, 2008 - link

    A little bit off topic, but:
    On what kinds of motherboards will you be able to use PhysX SLI? Will there be the same limitations as with normal SLI? (Only half of NVIDIA's chipsets support it; on X58 boards, SLI is only available on the 9800 GTX or above.)
    Which card is the minimum for PhysX SLI?
  • chizow - Thursday, November 20, 2008 - link

    PhysX SLI can be enabled on any chipset with multiple PCIe x16 slots, even ones that don't support native GPU SLI. I believe all cards from the 8800 series onward support GPU-accelerated PhysX.
