Original Link: http://www.anandtech.com/show/960

It seems like almost every day we hear about another graphics company coming back from the dead with a killer new graphics chip set to put ATI and NVIDIA out of business. If there's anything the average AnandTech reader has learned over the past few years, it's that talk is cheap, and over-promising while under-delivering will earn you a one-way ticket to exile. Thus it was no big surprise that we didn't get too excited when we heard that Trident was planning a grand return to the desktop graphics market with a GeForce4 competitor priced at less than $100 (USD).

Trident has never been known for producing high-performance 3D accelerators; in fact, their reputation mirrors VIA's old character as a high-volume, low-cost, low-performance manufacturer. Trident was the company making the chips for the no-name "PCI Video Card" boxes you'd find for $29, and they were also an OEM's best friend, delivering basic graphics functionality at a very low cost. When the 3D revolution picked up, Trident missed the bandwagon, and it cost them a good deal of market share.

According to Mercury Research, as of Q4 2001 Trident held 4% of the desktop graphics market; but if you're doing anything other than growing in a market, you'll quickly see that share erode, which is the driving force behind Trident's latest venture.

Obviously Trident isn't even going to try to go after the market that ATI's Radeon 9700 and NVIDIA's NV30 are aimed at, but once you realize that those products will ship to less than 5% of the desktop market, it's clear that it wouldn't be in Trident's best interest to compete there at all. Trident is still a very high-volume manufacturer, so any endeavor they undertake would be one where they can deal in large quantities, namely the sub-$100 graphics market.

If you sliced the desktop graphics market into 4 different sectors ( < $100, $100 - $200, $200 - $300, > $300) you'd realize that the largest volumes would be in the sub-$100 range, and that's exactly what Trident is targeting with their XP4.

The XP4 is Trident's first desktop DirectX 8.1 GPU that is supposed to offer 80% of the performance of a GeForce4 Ti 4600 at $99. We know, talk is cheap, but let's see if Trident is on to something even reasonably viable with this little chip.

A GeForce4 with 30M Transistors?

Let's have a quick look at the high-level specs of the XP4 before we get much further:

  • 0.13-micron GPU clocked at 250 - 300MHz
  • 30 million transistors
  • 4 pixel rendering pipelines, 2 texture units per pipeline
  • 2 programmable vect4 vertex shader pipelines
  • 64/128-bit DDR memory bus
  • up to 256MB of memory on board, clocked at 250 - 350MHz (500 - 700MHz DDR)
  • Tile-based rasterization engine
  • AGP 4X Support
  • Full DX8.1 Pixel and Vertex Shader Support with a base level of DirectX 9 support

As you can see from the spec list above, the XP4 is a 0.13-micron chip, fabbed at UMC, and it is composed of only 30 million transistors. To put things in perspective, the GeForce4 is a 63M-transistor chip and the ATI Radeon 9700 is made of close to 110M transistors; the XP4's transistor count is equal to that of the original Radeon and just slightly higher than that of the GeForce2 GTS. Although the XP4 is built on a smaller process, a process shrink reduces die size, not transistor count, so the savings must come from the design itself. The low transistor count is the key to Trident's ability to price XP4 graphics cards at under $100, but how do they achieve it?

Just looking at the number of vertex shaders and pixel rendering pipelines, you can tell that the GPU is very much like a GeForce4; both have two vertex shader units and four rendering pipes with two texture units per pipe. These two areas take up the vast majority of the die area (and thus transistor count) on a GPU, so it would make sense if Trident managed to decrease the XP4's transistor count by tinkering here. Unfortunately, this is the area where Trident is least forthcoming with information, out of a desire to protect the IP that went into the XP4. That's understandable, but it limits what we can explain to you.

Truly Four Rendering Pipelines?

The reason the XP4 is basically half the size of a GeForce4 comes down to Trident's implementation of their 4 pixel rendering pipelines. These four pipes make up the largest single part of the GPU by far and the more pipes you have, the greater the transistor count gets. ATI and NVIDIA outfit their GPUs with multiple pixel rendering pipelines by producing one and basically duplicating the logic multiple times until they reach the desired number of pipes. For example, if one pixel rendering pipeline on the GeForce4 took 15M transistors to implement then each subsequent pipeline would take 15M transistors.

Trident took a slightly different approach with the XP4; the first rendering pipeline of the XP4 is identical to any DX8.1 pixel shader pipeline on any present-day GPU. Using the example above, we can say that this pipe would be made up of around 15M transistors. However, instead of duplicating the logic three more times to implement the remaining pipes, Trident implements a large amount of resource sharing very early on in each of the subsequent pipelines. The end result is that each pipeline after the first one is less complex and shares some of the logic that went into the first pipeline.

Four pixels can still come out of the pipes every clock, but because of the resource sharing, the performance of this solution will never match that of four completely independent pipelines; there is a reduction in parallelism with this design.

The obvious benefit of Trident's approach is a massively reduced transistor count, making the chip much cheaper to manufacture. To give you an idea of the level of savings: if the first pipe requires 15M transistors, then the second would require only 7.5M, the third 3.8M, and the fourth around 2M. These are obviously rough estimates, but they give you a good idea of the transistor savings the XP4 enjoys because of this technology.
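To make the scale of the savings concrete, here's a quick sketch of the arithmetic using the article's illustrative figures (these are our rough estimates from above, not Trident's actual numbers):

```python
# Rough transistor-budget comparison (figures in millions) based on the
# illustrative estimates above -- not real Trident data.
FULL_PIPE = 15.0  # one complete DX8.1 pixel rendering pipeline

# Conventional approach (ATI/NVIDIA): duplicate the full pipeline 4x.
conventional = 4 * FULL_PIPE

# XP4-style sharing: each subsequent pipe reuses logic from the first,
# so its incremental cost shrinks with each duplication.
shared_pipes = [FULL_PIPE, 7.5, 3.8, 2.0]
xp4_style = sum(shared_pipes)

print(f"conventional: {conventional:.1f}M transistors")  # 60.0M
print(f"shared:       {xp4_style:.1f}M transistors")     # 28.3M
```

Under these assumed figures, four shared pipes cost less than half of four independent ones, which is consistent with the XP4 coming in at roughly half the GeForce4's transistor count.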

The big question is at what performance cost, and we won't be able to find out until we can do some extensive testing on the first XP4 solutions in a few weeks.

Although the majority of the transistor savings come from Trident's unique implementation of their four rendering pipelines, other choices, such as omitting fixed-function T&L, help reduce transistor count as well; fixed-function requests are internally converted into vertex shader programs by the XP4.
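The idea behind dropping fixed-function T&L is that the fixed pipeline's vertex transform is just a matrix multiply, which the programmable shader units can execute as a short program instead of dedicated hardware. A minimal sketch of the concept (the function names are ours, purely for illustration; Trident hasn't published how their driver does this):

```python
# Sketch: emulating fixed-function vertex transform with a "shader".
# All names here are illustrative, not Trident's actual driver code.

def transform(mvp, vertex):
    """4x4 matrix * 4-component vertex: what fixed-function T&L hardware
    computes for every vertex."""
    return [sum(mvp[row][i] * vertex[i] for i in range(4)) for row in range(4)]

def compile_fixed_function(mvp):
    """The driver 'compiles' the current fixed-function state into a
    vertex program that the shader units execute per vertex."""
    def vertex_shader(vertex):
        return transform(mvp, vertex)
    return vertex_shader

# With an identity matrix, the shader passes vertices through unchanged.
identity = [[1 if r == c else 0 for c in range(4)] for r in range(4)]
shader = compile_fixed_function(identity)
print(shader([1, 2, 3, 1]))  # [1, 2, 3, 1]
```

The transistor savings come from not duplicating the transform logic in hardware; the cost is that fixed-function workloads now compete with shader workloads for the two vertex units.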

The lower transistor count, coupled with the 0.13-micron manufacturing process, gives the XP4 very low average power consumption of between 3 and 4W. While this is still a bit higher than we'd like to see for mobile use, it's clear that Trident has the notebook market in mind as an application for the XP4.

The XP4 has long been rumored to be a tile-based deferred rendering solution like STMicro's Kyro II, but as intriguing as deferred rendering technologies are, you won't find any such technology in the XP4. Instead, the XP4 is a conventional immediate-mode renderer like the GeForce4 or Radeon 9700, but with a tile-based rasterization engine. All this means is that the XP4 uses a tile-based algorithm for storing pixels in its frame buffer; instead of writing lines of pixel data, the XP4 writes the data in blocks/tiles. The XP4's tile-based rasterizer is much like Intel's 845G graphics core in this respect, and the main reason behind it is to optimize for the XP4's internal caches. The end result is improved memory bandwidth efficiency, which helps tremendously considering that the XP4 has no real occlusion culling technology.
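To illustrate why tiled storage helps the caches, here's a sketch of linear versus tiled frame-buffer addressing. The 8x8 tile size and layout are our assumptions for illustration; Trident hasn't published the XP4's actual tiling parameters:

```python
# Sketch of linear vs. tiled frame-buffer addressing. Tile size is an
# assumption for illustration, not the XP4's documented geometry.

def linear_address(x, y, width):
    """Scanline order: a pixel's vertical neighbor is a full row away."""
    return y * width + x

def tiled_address(x, y, width, tile=8):
    """Tiled order: each 8x8 block of pixels is contiguous in memory, so
    neighbors in both x and y tend to share cache lines."""
    tiles_per_row = width // tile
    tile_index = (y // tile) * tiles_per_row + (x // tile)
    return tile_index * tile * tile + (y % tile) * tile + (x % tile)

# Two vertically adjacent pixels in a 1024-wide frame buffer:
w = 1024
print(linear_address(10, 6, w) - linear_address(10, 5, w))  # 1024 apart
print(tiled_address(10, 6, w) - tiled_address(10, 5, w))    # only 8 apart
```

Rasterizers walk triangles in 2D, so keeping small 2D blocks contiguous means far fewer cache misses than scanline order, which is exactly the bandwidth-efficiency win described above.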

Trident supports supersampling AA and anisotropic filtering with the XP4, but it's not clear how their unique architecture is impacted by enabling antialiasing or higher-order filtering techniques.

The XP4 has a single 420MHz RAMDAC but it supports multiple displays through the use of a digital LCD or TV-out alongside a single analog monitor.

The XP4 Line

Trident is segmenting the XP4 line according to memory sizes, bus widths and clock speeds; the three members of the XP4 line are the T3, T2 and T1 and their specs are as follows:

The Trident XP4 T3 will run at a 300MHz core clock and come with 128MB of memory. The T3 has a 128-bit DDR memory bus and will be paired with 300 - 350MHz DDR memory (effectively 600 - 700MHz). This will give cards based on the T3 between 9.6GB/s and 11.2GB/s of memory bandwidth. Trident is expecting performance of the T3 to come within 80% of a GeForce4 Ti 4600. Retail graphics cards based on the T3 with 128MB of memory will be priced at $99.

The XP4 T2 is identical to the T3 except it only has 64MB of 250MHz DDR memory (effectively 500MHz) and has a 250MHz core clock. The reduced memory clock gives the T2 only 8GB/s of memory bandwidth. The T2 will retail for $79.

The XP4 T1 is identical to the T2 except it only has a 64-bit memory bus (cutting memory bandwidth in half) and will retail for $69.
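The bandwidth figures for the three parts fall out of a simple formula: bus width in bytes times the effective memory clock. A quick sanity check of the numbers quoted above:

```python
# Sanity-checking the quoted memory bandwidth figures:
# bandwidth (GB/s) = (bus width in bits / 8) * effective clock (MHz) / 1000

def bandwidth_gbs(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz / 1000

# XP4 T3: 128-bit bus, 600 - 700MHz effective DDR
print(bandwidth_gbs(128, 600), bandwidth_gbs(128, 700))  # 9.6 11.2
# XP4 T2: 128-bit bus, 500MHz effective
print(bandwidth_gbs(128, 500))  # 8.0
# XP4 T1: halving the bus to 64-bit halves the T2 figure
print(bandwidth_gbs(64, 500))   # 4.0
```

For reference, the T1's resulting 4GB/s is in the same ballpark as a GeForce2 MX-class part, which puts the sub-$70 positioning in perspective.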

Trident will be playing an NVIDIA-like role in the production of the XP4: they will only produce chips and will rely on 3rd-party manufacturers to produce cards. What's not too comforting is that Trident wasn't able to disclose any launch partners to us, nor have we heard of any of the usual suspects producing boards based on the XP4 yet.

Trident expects 90% of their shipments to be of the T2 and T1 parts.

Final Words

Without hardware in hand, there's not much we can say about the XP4 other than that we're just going to have to wait and see. Obviously there's much more to making a successful GPU than a nice set of specifications on paper; everything from yields to a solid driver team can impact how well a GPU actually does in the real world. Here's what we know about the XP4 as of now:

The XP4 will be on store shelves in October for the holiday buying season. Production silicon is currently in qualification, which should last another few weeks at most. According to Trident, the driver is presently at 90% of their performance target (the 80%-of-a-Ti-4600 mark). If Trident can attain that target, the XP4 T3 would be great competition for the GeForce4 Ti 4200 and the Radeon 9000 Pro at a much lower price, but we're understandably skeptical. Trident has also told us that they are committed to a 6-month product release schedule, and that in less than 6 months we should start hearing about the XP4's successor.

What remains to be seen is how Trident's unique shared-pipeline architecture impacts performance in real-world tests, and in the coming weeks that's exactly what we plan on finding out. We'll keep you posted on how things turn out with the XP4 once we get our hands on some working hardware…
