NVIDIA - Fermi Up and Running, Tegra 2

NVIDIA finally showed off Fermi running in an Eyefinity-like 3-display setup:

The demo also used NVIDIA's stereoscopic 3D technology - 3D Vision. We're hearing that the rumors of a March release are accurate, but despite the delay Fermi is supposed to be very competitive (at least 20% faster than the 5870?). The GeForce GTX 260 and 275 will stick around for the first half of the year, as Fermi isn't expected to reach such low-price/high-volume territory at the start of its life.

Aside from Fermi, Tegra 2 tablets and netbooks were on display:

NVIDIA hopes that OEMs will be able to use Tegra 2 to create intuitive 3D user interfaces, but so far I haven't seen anything too exciting.


Pegatron's Tegra 2 based Smartbook

Comments

  • Zool - Tuesday, January 12, 2010 - link

    I think that GF100 is different from the Tesla die and that they have two GPU dies this time: the Tesla die with maxed-out DP capability, and a similar GF100 die without the DP functionality. This site http://www.semiconductor.net/article/438968-Nvidia... says the chip is 3.2 billion transistors and that they are having problems with defect rates at TSMC at those die sizes.

    Two 1.5 billion transistor chips connected on one package would have a much lower defect rate than GF100. NVIDIA can dream about giant chips with zero defects, but I think the future is in smaller dual- and quad-core GPUs that can be manufactured effectively. It's a similar road to the one CPUs took years ago, with one difference: GPUs are massively parallel, so performance could scale nearly linearly with added cores.

    [A back-of-the-envelope yield sketch illustrating this point appears after the comments.]
  • Calin - Monday, January 11, 2010 - link

    There are some issues here:
    1. NVIDIA wants Fermi to be faster overall, but it's possible it won't be faster than the 5870 in DX11 for now. Drivers will help in the long run, and game engines optimised for Fermi will also help (maybe a lot).
    2. NVIDIA wants Fermi to succeed on the desktop, but their big hopes are for Fermi in the supercomputing arena. They might have chosen a design that performs worse on the desktop in order to crush everything in the race for teraflops.
  • lopri - Saturday, January 9, 2010 - link

    That GMA is finally on par with equivalent offerings from AMD/NV (about which I still have my doubts) is the most exciting news of CES 2010 for you?
  • Someguyperson - Monday, January 11, 2010 - link

    As a side note, the integrated graphics being able to handle AutoCAD isn't very impressive; my high school computers with integrated GMA 950 graphics could handle AutoCAD and Autodesk Inventor, though they would BSOD if you made ~10,000 lines and tried to zoom in and out really quickly. It was pretty fun.
  • Calin - Monday, January 11, 2010 - link

    Intel is fighting in the graphics market on price alone (workable integrated hardware and drivers, but at very low quality and a very low price). AMD and NVIDIA fight in this market with not-so-low prices but better quality.
    This will bring at least two advantages:
    1. Most computers will come with better integrated graphics (Intel owns more than half the integrated graphics market).
    2. NVIDIA and AMD are forced to either fight a price war with Intel (which they won't do in the integrated graphics department) or offer better products.

    I for one am expecting the better products :)
  • - Sunday, January 10, 2010 - link

    he's been drinking too much of the Intel Kool-Aid - it's now a reflex
    asH
  • lopri - Saturday, January 9, 2010 - link

    It's a question, btw. :) I thought you had covered it prior to CES.
  • formulav8 - Sunday, January 10, 2010 - link

    Like some others, I'm not too impressed by Intel's new graphics in itself.

    The graphics has direct access to the memory controller and STILL can't beat 2-year-old solutions except in a few areas. To me that is pretty pathetic.

    Obviously, though, others are quite impressed, like Anand, but I would expect much more from a GPU with on-die memory controller access, even Intel's typical trash.

    Jason
  • JarredWalton - Sunday, January 10, 2010 - link

    I don't think anyone caught the two links Anand posted. The two Intel items that Anand was most impressed with are the wireless HD transmitter and the Moorestown smartphone demonstration. Arrandale and the new Intel IGP aren't among the most impressive items from CES.

    Personally, I'm moderately impressed that Intel has finally put some real effort into improving their IGP. This is evidenced by the fact that a 20% increase in the number of shader cores/pipelines resulted in up to a 100% increase in performance.

    The GMA 900/950 series was horrible compared to the NVIDIA and ATI IGPs of the time (Radeon Xpress 1200/1250 and GeForce 6100). The next-generation GMA 3000 parts still delivered less than 1/4 the performance of the newer NVIDIA/ATI IGPs (Xpress 3200 and 7100 series). Then we had a lot of stagnation: the 8100 and 9300/9400 from NVIDIA and the HD 3200/4200 from ATI really didn't do much more than their predecessors. The GMA 4500MHD added much of the HD decode support and, with appropriate drivers, delivers about half the performance of ATI/NVIDIA IGPs. Now they have closed the gap, and I'll be surprised if ATI and NVIDIA deliver much more than a 10-20% performance boost on their next IGPs (though naturally ATI will have a DX11 IGP).

    When you get right down to it, though, all IGPs still pretty much suck for 3D performance. They can handle minimum detail settings at 800x600 most of the time, and on less demanding titles you can even get 1366x768 with medium detail. Any $100 discrete GPU ends up being at least twice as fast, and with hybrid graphics really kicking off now that Win7 is out, if you want 3D performance there's really no reason not to find a laptop with a discrete GPU.
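Zool's yield argument above can be made concrete with a back-of-the-envelope Poisson yield model, in which the fraction of defect-free dies falls exponentially with die area. The sketch below is illustrative only: the die areas, defect density, and usable wafer area are assumptions made for the sake of the example, not figures from the comment or the linked article.

```python
import math

def die_yield(area_mm2: float, d0_per_mm2: float) -> float:
    """Fraction of defect-free dies under a simple Poisson yield model."""
    return math.exp(-area_mm2 * d0_per_mm2)

# Assumed, illustrative numbers (not from the article): one ~500 mm^2
# monolithic die vs. two ~250 mm^2 dies that are tested individually
# and then paired on a package, at a defect density of 0.5 defects/cm^2.
D0 = 0.005        # defects per mm^2
WAFER = 56_000    # usable area of a 300 mm wafer in mm^2 (rough)

good_big   = (WAFER / 500) * die_yield(500, D0)   # good monolithic dies
good_small = (WAFER / 250) * die_yield(250, D0)   # good half-size dies
packages   = good_small / 2                       # two good dies per package

print(f"good monolithic chips per wafer:  {good_big:.0f}")   # ~9
print(f"good dual-die packages per wafer: {packages:.0f}")   # ~32
```

The advantage comes entirely from testing dies before pairing them: if defective dies could not be discarded first, two 250 mm^2 dies would yield exactly as poorly as one 500 mm^2 die under this model. In practice foundries also salvage partially defective large dies by fusing off bad units, which narrows the gap somewhat.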
