Bridged or Native: Six of One, Half a Dozen of the Other?

Before looking at how NV45 solves some of NVIDIA's problems, we must first learn what those problems are. To understand this, we need to take a step back and look at the transition to PCI Express on a larger scale, and why ATI chose native while NVIDIA decided on bridged.

If ATI had gone with a bridged solution rather than native, we wouldn't even be having this discussion. Their choice to go native was based on their internal assessment of the playing field. Obviously, we will eventually end up with all GPUs sporting native PCIe interfaces. Bridging does add a little latency and reduces the maximum possible bandwidth benefit of PCIe (especially if the AGP side couldn't be overclocked to what would be 16x speeds, as NVIDIA has done with its solution). Native solutions are also easier for OEMs to integrate onto their boards. Based on all this, we can't really say that ATI made a bad decision in going with native PCI Express. So, why did NVIDIA take a different stance on the issue?
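
To put the bandwidth numbers in perspective, here is a quick back-of-the-envelope comparison. This is a minimal Python sketch using nominal peak figures from the public AGP 3.0 and PCI Express 1.0 specs; the clocks and encoding overheads are our own arithmetic, not numbers supplied by NVIDIA or ATI.

```python
# Nominal peak bandwidths for the interfaces discussed above.
# Real-world throughput is always lower than these figures.

def agp_bandwidth_gb_s(multiplier):
    """AGP: 32-bit parallel bus, 66 MHz base clock, 'multiplier'
    transfers per clock. Returns decimal GB/s."""
    return 4 * 66.66e6 * multiplier / 1e9

def pcie1_bandwidth_gb_s(lanes):
    """PCIe 1.0: 2.5 GT/s per lane per direction, 8b/10b encoding
    (8 data bits per 10 bits on the wire). Returns decimal GB/s."""
    return lanes * 2.5e9 * (8 / 10) / 8 / 1e9

print(f"AGP 8x (shared, one direction at a time): {agp_bandwidth_gb_s(8):.1f} GB/s")
print(f"AGP pushed to '16x' speeds:               {agp_bandwidth_gb_s(16):.1f} GB/s")
print(f"PCIe x16, per direction:                  {pcie1_bandwidth_gb_s(16):.1f} GB/s")
```

Run at 16x speeds, the AGP side of the bridge roughly matches what a PCIe x16 slot can move in one direction, which is why the bridge gives up so little peak bandwidth; left at 8x, it would cap the card at about half of that.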

NVIDIA is trying to be cautious about the issue. With native PCI Express support, NVIDIA would need to fabricate twice as many different types of GPUs as it currently does, with the only difference being the PCIe interface. Regardless of the size of a company, putting an ASIC to silicon is not something to be taken lightly: it's very expensive, especially for potentially low-yield, 222-million-transistor parts. At the same time, building separate GPUs specifically targeted at PCIe means estimating how many PCIe parts will be needed, which in turn means estimating the adoption rate of PCI Express itself. It's much easier to make a bunch of GPUs and bridges and play it by ear. This all makes logical sense, and as long as there is a negligible impact on the performance and cost (in other words, value) delivered to end users, we can't fault NVIDIA for going in this direction.
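
As a rough illustration of that trade-off, consider a toy break-even model. Every dollar figure below is a hypothetical placeholder of ours for illustration; actual mask-set and bridge costs are not public.

```python
# Toy model: respin every GPU with a native PCIe interface, or keep the
# one AGP die and add a small bridge chip to PCIe boards only.
# All costs are invented for illustration.

NATIVE_MASK_SET = 1_000_000   # hypothetical: extra tape-out for a large GPU
BRIDGE_MASK_SET = 200_000     # hypothetical: tape-out for a tiny bridge die
BRIDGE_UNIT_COST = 5          # hypothetical: per-bridge price in volume

def native_extra_cost(pcie_units):
    # The GPU die costs the same either way, so only the extra mask set
    # matters for the comparison; unit volume drops out.
    return NATIVE_MASK_SET

def bridged_extra_cost(pcie_units):
    return BRIDGE_MASK_SET + pcie_units * BRIDGE_UNIT_COST

for units in (10_000, 100_000, 1_000_000):
    print(f"{units:>9,} PCIe boards: native ${native_extra_cost(units):,} "
          f"vs bridged ${bridged_extra_cost(units):,}")
```

With these made-up numbers, bridging wins easily while PCIe volume is small or uncertain, and only loses once adoption is high and predictable, which is exactly the bet NVIDIA appears to be making.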

NV45 without the fan and shroud installed.

And since we've already shown performance not to be an issue, only cost remains in question for end users. Of course, in our assessment of why NVIDIA went with a bridge, we didn't address one of the advantages that ATI has with its native solution: ease of integration by OEMs. More importantly, the disadvantage of the bridge solution isn't simply the inclusion of another discrete component (and another possible point of failure), but the continued need to route another parallel bus on an already packed card. Granted, the close proximity of the HSI to the GPU on NVIDIA cards makes the routing problem much less significant, but it would still be easier just to route serial lines from the connector to the GPU.
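
To give a feel for that routing trade-off, here is a rough tally of the high-speed signals each interface asks a board designer to route. These are our own approximate counts from public pinout descriptions; the exact control-signal total is an assumption.

```python
# Approximate high-speed signal counts; the point is the routing rules,
# not the exact totals.

agp_8x = {
    "AD[31:0] multiplexed address/data": 32,
    "SBA[7:0] sideband addressing": 8,
    "strobes and control (approx.)": 12,
}
pcie_x16 = {
    "TX pairs (16 lanes x 2 wires)": 32,
    "RX pairs (16 lanes x 2 wires)": 32,
    "reference clock pair": 2,
}

print(f"AGP 8x:   {sum(agp_8x.values())} single-ended traces, all "
      f"length-matched against shared strobes")
print(f"PCIe x16: {sum(pcie_x16.values())} wires, but each lane carries an "
      f"embedded clock, so only the two wires within a pair need tight matching")
```

Serial PCIe actually uses more copper, but the matching constraints apply per pair rather than across the whole bus, which is why routing lanes from the connector straight to the GPU is the easier job.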

There are multiple squishy issues surrounding this, so we will do a little speculation using what we know as a basis. OEMs are essentially locked into selling their video cards at set price points, such as the now-famous $499, $399, and $299 targets. They can't be competitive unless they meet these goals, and they can't meet these goals and stay profitable unless ATI and NVIDIA keep the cost of their components down. It's certainly more cost effective for NVIDIA to pass on the cost of a bridge chip along with its GPUs than for ATI to absorb the impact of constructing a whole separate line of parts. Of course, we have no clue what prices ATI and NVIDIA charge for their components. As there is competition in the marketplace, it would make sense for parts targeted at similar markets to have similar prices. But how much does NVIDIA take into consideration the integration cost that its bridge adds for OEMs? If ATI has to charge more to cover its fabrication costs, NVIDIA may have enough breathing room to absorb that cost itself. But it's more likely that the OEMs would have to eat the cost, or pass it on to the consumer.

The unfortunate bottom line is that we really don't know the facts and details of what goes on behind the scenes, but there is a certain allure to not having to deal with a discrete bridge component. So, what's an IHV who's decided against native support to do? Enter NV45.

Comments

  • kherman - Monday, June 28, 2004 - link

    How did this get the NV45 label? Shouldn't this be the NV40p or sum'n?
  • Minotaar - Monday, June 28, 2004 - link

    Pentium Pro did NOT have on-package cache. Pentium Pro had On-DIE cache. Pentium 2 took a step backwards and had on-package cache (that huge ugly slot garbage, with the triple fans from OC co's like Glacier? The side two fans cooled cache chips on the side). It wasn't until socket P3 that on-die cache came back.

    That's why Celeron happened the way it did. It started off as a P2 with none of the on-package cache. Remember the ol' celly 266 that OC'd to 450, and for some lucky ones 504? Well, that was just the P2 card without the cache on the sides - the sides were empty.

    Pentium Pro also had the advantage of cache running at clock speed, whereas the P2's cache ran at bus speed. But I digress. The article has an inconsistency.
  • Brucmack - Monday, June 28, 2004 - link

    Well, you're not going to gain anything in the near future with PCIe, so if you already have an AGP card, don't bother.

    It would probably be a good idea to get a PCIe card if you're upgrading to the new Intel chipset though. The boards that have both PCIe and AGP slots are running the AGP slot off of the PCI bus, so there will be a slight performance penalty associated with that.
  • GhandiInstinct - Monday, June 28, 2004 - link

    So is AGP8x faster or better than PCIe? Because that's what I got from those earlier benchmarks. Or will drivers and optimizations change that in the future?

    Basically, is it worthwhile (money-wise) to purchase PCIe now?
