DirectX 7 Performance

Below you can see our DirectX 7-based video processor chart:

| Video Card | Core (MHz) | RAM (MHz)* | Pixel Pipelines | Textures/Pipe** | Vertex Pipelines*** | RAM Bus (bits) | Fill Rate (MTexels/s)+ | Vertex Rate (MVerts/s)++ | Bandwidth (MB/s)+++ | Fill Rate % | Bandwidth % | Vertex Rate % | Overall %++++ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GF2 GTS (baseline) | 200 | 333 | 4 | 2 | 0.5 | 128 | 1600 | 25 | 5081 | 100.0% | 100.0% | 100.0% | 100.0% |
| 7500 | 290 | 460 | 2 | 3 | 0.5 | 128 | 1740 | 36 | 7019 | 108.8% | 138.1% | 145.0% | 130.6% |
| GF4 MX460 | 300 | 550 | 2 | 2 | 0.5 | 128 | 1200 | 38 | 8392 | 75.0% | 165.2% | 150.0% | 130.1% |
| GF2 Ultra | 250 | 460 | 4 | 2 | 0.5 | 128 | 2000 | 31 | 7019 | 125.0% | 138.1% | 125.0% | 129.4% |
| GF2 Ti | 250 | 400 | 4 | 2 | 0.5 | 128 | 2000 | 31 | 6104 | 125.0% | 120.1% | 125.0% | 123.4% |
| GF4 MX440 8X | 275 | 500 | 2 | 2 | 0.5 | 128 | 1100 | 34 | 7629 | 68.8% | 150.2% | 137.5% | 118.8% |
| 7500 LE | 250 | 360 | 2 | 3 | 0.5 | 128 | 1500 | 31 | 5493 | 93.8% | 108.1% | 125.0% | 109.0% |
| GF4 MX440 | 275 | 400 | 2 | 2 | 0.5 | 128 | 1100 | 34 | 6104 | 68.8% | 120.1% | 137.5% | 108.8% |
| GF2 Pro | 200 | 400 | 4 | 2 | 0.5 | 128 | 1600 | 25 | 6104 | 100.0% | 120.1% | 100.0% | 106.7% |
| 7500 AIW | 250 | 333 | 2 | 3 | 0.5 | 128 | 1500 | 31 | 5081 | 93.8% | 100.0% | 125.0% | 106.3% |
| GF2 GTS | 200 | 333 | 4 | 2 | 0.5 | 128 | 1600 | 25 | 5081 | 100.0% | 100.0% | 100.0% | 100.0% |
| GF4 MX440 SE | 250 | 333 | 2 | 2 | 0.5 | 128 | 1000 | 31 | 5081 | 62.5% | 100.0% | 125.0% | 95.8% |
| Radeon DDR | 183 | 366 | 2 | 3 | 0.5 | 128 | 1098 | 23 | 5585 | 68.6% | 109.9% | 91.5% | 90.0% |
| GF4 MX4000 | 275 | 400 | 2 | 2 | 0.5 | 64 | 1100 | 34 | 3052 | 68.8% | 60.1% | 137.5% | 88.8% |
| GF4 MX420 | 250 | 333 | 2 | 2 | 0.5 | 64 | 1000 | 31 | 2541 | 62.5% | 50.0% | 125.0% | 79.2% |
| Radeon LE | 148 | 296 | 2 | 3 | 0.5 | 128 | 888 | 19 | 4517 | 55.5% | 88.9% | 74.0% | 72.8% |
| GF2 MX400 | 200 | 166 | 2 | 2 | 0.5 | 128 | 800 | 25 | 2541 | 50.0% | 49.8% | 100.0% | 66.6% |
| Radeon SDR | 166 | 166 | 2 | 3 | 0.5 | 128 | 996 | 21 | 2533 | 62.3% | 49.8% | 83.0% | 65.0% |
| 7200 | 183 | 183 | 2 | 3 | 0.5 | 64 | 1098 | 23 | 1396 | 68.6% | 27.5% | 91.5% | 62.5% |
| GF2 MX | 175 | 166 | 2 | 2 | 0.5 | 128 | 700 | 22 | 2541 | 43.8% | 49.8% | 87.5% | 60.4% |
| GeForce 256 DDR | 120 | 300 | 4 | 1 | 0.5 | 128 | 480 | 15 | 4578 | 30.0% | 90.1% | 60.0% | 60.0% |
| GF2 MX200 | 175 | 166 | 2 | 2 | 0.5 | 64 | 700 | 22 | 1266 | 43.8% | 24.9% | 87.5% | 52.1% |
| GeForce 256 SDR | 120 | 166 | 4 | 1 | 0.5 | 128 | 480 | 15 | 2533 | 30.0% | 49.8% | 60.0% | 46.6% |
| 7000 AGP^ | 183 | 366 | 1 | 3 | 0 | 64 | 549 | 0 | 2792 | 34.3% | 55.0% | 0.0% | 29.8% |
| 7000 PCI^ | 166 | 333 | 1 | 3 | 0 | 64 | 498 | 0 | 2541 | 31.1% | 50.0% | 0.0% | 27.0% |
| Radeon VE^ | 183 | 183 | 1 | 3 | 0 | 64 | 549 | 0 | 1396 | 34.3% | 27.5% | 0.0% | 20.6% |
* RAM clock is the effective clock speed, so 250 MHz DDR is listed as 500 MHz.
** Textures/Pipeline is the maximum number of texture lookups per pipeline.
*** Nvidia says their GFFX cards have a "vertex array", but in practice it generally functions as indicated.
**** Single-texturing fill rate = core speed * pixel pipelines
+ Multi-texturing fill rate = core speed * maximum textures per pipe * pixel pipelines
++ Vertex rates can vary by implementation. The listed values reflect the manufacturers' advertised rates.
+++ Bandwidth is expressed in actual MB/s, where 1 MB = 1024 KB = 1048576 Bytes.
++++ Relative performance is normalized to the GF2 GTS, but these values are at best a rough estimate.
^ Radeon 7000 and VE had their T&L Engine removed, and cannot perform fixed function vertex processing.
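To make the footnote formulas concrete, here is a minimal Python sketch (our own illustration, not anything from the original chart generator - the function name and the choice of cards are arbitrary) that reproduces the derived columns for two of the cards above:

```python
MB = 1048576  # footnote +++: 1 MB = 1048576 bytes

def derived_specs(core, ram, pipes, textures, bus_bits):
    """Core/RAM in MHz (RAM is the effective DDR clock); bus width in bits."""
    fill = core * textures * pipes                      # MTexels/s (footnote +)
    triangles = 0.5 * core / 4                          # advertised MTriangles/s (explained below)
    bandwidth = ram * 1_000_000 * (bus_bits // 8) / MB  # MB/s (footnote +++)
    return (fill, triangles, bandwidth)

gts = derived_specs(200, 333, 4, 2, 128)    # GF2 GTS, the 100% baseline
mx460 = derived_specs(300, 550, 2, 2, 128)  # GF4 MX460

# Overall rating = mean of the three ratios against the GF2 GTS baseline
overall = sum(m / g for m, g in zip(mx460, gts)) / 3
print(gts)               # (1600, 25.0, ~5081) -- matches the GF2 GTS row
print(mx460)             # (1200, 37.5, ~8392) -- matches the GF4 MX460 row
print(f"{overall:.1%}")  # 130.1%, as listed above
```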

Now we're talkin' old school. There are those people in the world who simply can't stand the thought of having less than the latest and greatest hardware on the planet in their PC, and then there are people who have social lives. Okay, it's not that bad, but not everyone needs a super powerful graphics card. In fact, there are plenty of businesses running computers with integrated graphics that would be thoroughly outclassed by even the five-year-old GeForce 256. If you're only playing older 3D games, or if you just want the cheapest non-integrated card you can find, DX7 cards fit the bill. A Home Theater PC that plays movies has no need for anything more, for instance. Or maybe you have a friend who's willing to give you his old graphics card, and you want to know whether it will be better than the piece of junk you already have. Whatever the case, here are the relative performance figures for the DX7-era cards.

No special weighting was used, although with this generation of hardware you might want to pay closer attention to memory bandwidth than to the other areas. Fill rate is still important as well, but vertex rate is almost a non-issue. In fact, vertex rates weren't even advertised for these cards - they were quoted as triangle rates. Since these chips had a fixed-function Transform and Lighting (T&L) pipeline, triangles per second was the standard unit of measurement. The vertex pipelines are listed as "0.5" for the DX7 cards, emphasizing that they are not programmable geometry processors. As luck would have it, 0.5 times the core clock divided by 4 also matches the advertised triangle rates, at least on the NVIDIA cards; for example, the GeForce 256's 120 MHz core works out to 0.5 * 120 / 4 = 15 million triangles per second, exactly the figure in the chart. Vertex rates are anywhere from two to four times this value, depending on whether or not edges are shared (see the sketch below), but again these rates are not achievable with any known benchmark. One item worth pointing out is that the Radeon 7000 and VE parts had their vertex pipeline deactivated or removed, so they are not true DX7 parts, but they are included here because they bear the Radeon name.
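For the curious, the shared-edge effect is easy to see with vertex counts. This little sketch (again our own illustration, using the standard triangle list vs. triangle strip bookkeeping, not anything from the article's testing) shows why the same triangle rate can imply very different vertex rates:

```python
def vertices_needed(triangles, strip=False):
    # A triangle list needs 3 unique vertices per triangle; a strip shares
    # two vertices with the previous triangle, so n triangles need n + 2.
    return triangles + 2 if strip else triangles * 3

tris = 25_000_000  # GF2 GTS advertised triangles/sec from the chart
print(vertices_needed(tris))              # 75,000,000 vertices for a list
print(vertices_needed(tris, strip=True))  # 25,000,002 vertices for one long strip
```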

Early adopters of DX7 cards were generally disappointed, as geometry levels in games tended to remain relatively low. First there was a demo called "Dagoth Moor Zoological Gardens", created for the launch of the original GeForce 256 by a company called The Whole Experience; it used upwards of 100,000 polygons. Unfortunately, they never released any commercial games using the engine (at least, none that we're aware of). Later, at the launch of the GeForce 2, a different company created a demo with millions of polygons to show off the "future of gaming" - that company would eventually release a game based on its engine that you might have heard of: Far Cry. Actually, Crytek Studios demoed for both the GeForce 2 launch and the GeForce 3 launch. They used the same engine, and the demo name "X-Isle" was the same as well, but the GF3 version added support for some pixel shader and vertex shader effects. Four years after demonstrating the future, it finally arrived! Really, though, it wasn't that bad. Many games spend several years in development these days, so you can't blame Crytek too much for the delay. Besides, launching a game that only runs on the newest hardware is tantamount to financial suicide.

As far as performance is concerned, the GeForce 2 was the king of this class of hardware for a long time. After the GeForce 3, NVIDIA revisited DX7 cards with the GF4 MX line, which added hardware support for antialiasing and bump mapping. While it had only two pixel pipelines compared to the GF2's four, the higher core and RAM speeds generally allowed the GF4 MX cards to match the GF2 cards, and in certain cases beat them. The Radeon 7500 was also a decent performer in this class, although it generally trailed the GF2 slightly due to its 2x3 pixel pipeline, which could really only perform three texture operations if two of them came from the same texture. Worthy of mention is the nForce2 IGP chipset, which included a GF4 MX440 core in place of the normally anemic integrated graphics most motherboards offer. Its performance was actually closer to that of the GF4 MX420, due to sharing memory bandwidth with the CPU and other devices, but it remains one of the fastest integrated solutions to this day. Many cards were also crippled by the use of SDR memory or 64-bit buses - compare the GF4 MX440 to the otherwise identically clocked GF4 MX4000 in the chart, where halving the bus width halves the bandwidth (see the sketch below) - and we still see such things with modern cards, of course. Caveat emptor, as they say. If you have any interest in gaming, stay away from 64-bit buses; these days, even 128-bit buses are becoming insufficient.
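To put numbers on that last point, here is one more minimal sketch (our own illustration, simply reusing the bandwidth formula from the chart footnotes) comparing the two bus widths at the MX440/MX4000 clocks:

```python
MB = 1048576  # footnote +++: 1 MB = 1048576 bytes

def bandwidth_mb_s(effective_mhz, bus_bits):
    # effective RAM clock (MHz) * bus width in bytes, converted to binary MB/s
    return effective_mhz * 1_000_000 * (bus_bits // 8) / MB

print(round(bandwidth_mb_s(400, 128)))  # 6104 -- GF4 MX440, as charted
print(round(bandwidth_mb_s(400, 64)))   # 3052 -- GF4 MX4000: same clocks, half the bus
```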

Comments

  • Neo_Geo - Tuesday, September 7, 2004 - link

    Nice article.... BUT....
    I was hoping the Quadro and FireGL lines would be included in the comparison.
    As someone who uses BOTH professional (ProE and SolidWorks) AND consumer level (games) software, I am interested in purchasing a Quadro or FireGL, but I want to compare these to their consumer level equivalent (as each pro level card generally has an equivalent consumer level card with some minor, but important, optimizations).

    Thanks
  • mikecel79 - Tuesday, September 7, 2004 - link

    The AIW 9600 Pros have faster memory than the normal 9600 Pro. AIW 9600 Pro memory runs at 650MHz vs. the 600MHz on a normal 9600 Pro.

    Here's the Anandtech article for reference:
    http://www.anandtech.com/video/showdoc.aspx?i=1905...
  • Questar - Tuesday, September 7, 2004 - link

    #20,

    This list is not complete at all; it would be three times the size if it covered the last 5 or 6 years. It covers about the last 3, and it is laden with errors.

    Just another example of the half-assed job this site has been doing lately.
  • JarredWalton - Tuesday, September 7, 2004 - link

    #14 - Sorry, I went with desktop cards only. Usually, you're stuck with whatever comes in your laptop anyway. Maybe in the future, I'll look at including something like that.

    #15 - Good God, Jim - I'm a CS graduate, not a graphics artist! (/Star Trek) Heheh. Actually, you would be surprised at how difficult it can be to get everything to fit. Maximum width of the tables is 550 pixels. Slanting the graphics would cause issues making it all fit. I suppose putting in vertical borders might help keep things straight, but I don't like the look of charts with vertical separators.

    #20 - Welcome to the club. Getting old sucks - after a certain point, at least.
  • Neekotin - Tuesday, September 7, 2004 - link

    Great read! Wow! I didn't know there were so many GPUs in the past 5-6 years. It's like more than all the ones before them combined. Guess I'm a bit old.. ;)
  • JarredWalton - Tuesday, September 7, 2004 - link

    12/13: I updated the Radeon LE entry and resorted the DX7 page. I'm sure anyone who owns a Radeon LE already knows this, but you could use a registry hack to turn it into essentially a full Radeon DDR. (By default, the Hierarchical Z compression and a few other features were disabled.) Old Anandtech article on the subject:

    http://www.anandtech.com/video/showdoc.aspx?i=1473
  • JarredWalton - Monday, September 6, 2004 - link

    Virge... I could be wrong on this, but I'm pretty sure some of the older chips could actually be configured with either SDR or DDR RAM, and I think the GF2 MX series was one of those. The problem was that you could either have 64-bit DDR or 128-bit SDR, so it really didn't matter which you chose. But yeah, there were definitely 128-bit SDR versions of the cards available, and they were generally more common than the 64-bit DDR parts I listed. The MX200, of course, was 64-bit SDR, so it got the worst of both worlds. Heh.

    I think the early Radeons had some similar options, and I'm positive that such options existed in the mobile arena. Overall, though, it's a minor gripe (I hope).
  • ViRGE - Monday, September 6, 2004 - link

    Jarred, without getting too nit-picky, your data for the GeForce 2 MX is technically wrong; the MX used a 128-bit/SDR configuration for the most part, not a 64-bit/DDR configuration (http://www.anandtech.com/showdoc.aspx?i=1266&p...). Note that this isn't true for any of the other MXs (both the 200 and 400 widely used 64-bit/DDR), and the difference between the two configurations has no effect on the math for memory bandwidth, but it's still worth noting.
  • Cygni - Monday, September 6, 2004 - link

    I've been working with Adrian's Rojak Pot on a very similar chart to this one for a while now. Check it out:

    http://www.rojakpot.com/showarticle.aspx?artno=88&...
  • Denial - Monday, September 6, 2004 - link

    Nice article. In the future, if you could put the text at the top of the tables at an angle, it would make them much easier to read.
