Original Link: http://www.anandtech.com/show/632
Linux Video Card Comparison - October 2000
by Jeff Brubaker on October 9, 2000 12:10 AM EST
With the latest round of distributions, XFree86 4.0.1 has finally become the standard X server for Linux, supplanting the 3.3.x series that dominated for quite some time. With 4.0.1 come many substantial improvements, such as a completely rewritten XAA (X Acceleration Architecture) for 2D acceleration, Xinerama (contiguous multi-monitor support) and the DRI (Direct Rendering Infrastructure, for hardware accelerated OpenGL). Finally, Linux users have an extremely fast, free X server that can compete with commercial offerings such as Xi Graphics' Accelerated-X.
Hardware manufacturers have taken note of both the recent surge in Linux popularity and the revamping of XFree86. In the not too distant past, manufacturer support was considered good if driver specifications were available with which to write drivers. Now, several vendor supported XFree86 4.0.x drivers have been released or are in development by such manufacturers as NVIDIA, Matrox, 3dfx, and ATI. There are others, but these are the manufacturers we will focus on for this article.
Judging from the response to the last article, the number one question in Linux users' minds is "How do I get my card working with XFree86 4.0.1 and take advantage of such features as 3D acceleration?" Thus, this article will not only cover performance information, but setup as well. Anyone with one of these cards should be able to have a properly setup XFree86 system after reading this article.
DRI refers to the technologies surrounding OpenGL acceleration under Linux. It is X-Windows specific and thus requires an X server even for full screen games. Typically, drivers consist of a kernel module that provides protected access to the hardware as well as an XFree86 driver module. One nice thing about XFree86 driver modules is that they are operating system agnostic; the module loader was donated by Metrolink during the development of XFree86 4.0. Being operating system independent, the same driver module can work on FreeBSD, Solaris or any other operating system running XFree86, though it remains architecture specific. This does not mean that 3D acceleration will be supported everywhere, as the previously mentioned kernel module is typically required.
Another interesting piece of our newfound 3D acceleration is SGI's GLX module, which allows OpenGL applications to be displayed remotely and transparently, like any other X application. While remote display with hardware acceleration is not currently supported by DRI, it is on the list of future features to be added. NVIDIA's drivers (which do not use DRI, but a similar model) are said to already support it, though we did not test this ourselves.
Also, note that both XFree86 and DRI provide CVS access to development code that is often faster or more feature-rich than released drivers (currently in XFree86 4.0.1). For example, 32bpp rendering is now supported by the Matrox driver in DRI's CVS tree (though it is not supported by any of the three released drivers on Matrox's web page).
All tests were conducted in 16bpp for the simple reason that neither the released Matrox drivers (though the Matrox drivers in DRI's CVS tree do contain 32bpp support) nor the Intel 815 drivers support 32bpp 3D acceleration at this time. According to DRI's web page, the 815 will never support 32bpp rendering due to hardware limitations.
Note that it is also possible to get hardware accelerated OpenGL with XFree86 3.3.x, the previous major version, via Utah-GLX. Utah-GLX is not as clean a solution as DRI for the simple reason that, without kernel modules, one must run as root to get hardware acceleration. Still, it is very fast and supports the ATI Rage Pro cards, which are not supported by XFree86 4.0.x. It also supports the G200/G400 cards and is somewhat faster than XFree86/DRI at this time due to the current lack of multi-texturing support in the DRI driver. NVIDIA cards are supported as well, but without hardware specs it is impossible to support DMA or AGP transfers, greatly limiting their speed. Finally, the Intel 810 is supported as well.
XFree86 Setup: General
As mentioned before, a large number of the responses to the previous article dealt with configuration issues. We listened, and here's the general setup for those of you interested in playing with DRI. Also, please read the information on DRI's web page, as it's extremely useful when setting these cards up.
Note that this setup may not be necessary, as many distributions are including XFree86 4.0.1 and associated configuration utilities. The two that come to mind are Red Hat 7.0 and SuSE 7.0, but others may as well. These tests were conducted on Red Hat 7.0, which was intelligent enough to automatically configure several cards without manual intervention. Of course, we modified some parameters and installed more recent drivers for the NVIDIA and Matrox cards. Still, in time the installation procedure will get easier and easier.
To start, you need a basic /etc/X11/XF86Config file (Red Hat 7.0 uses /etc/X11/XF86Config-4 so as to allow support for both XFree86 3.3.6 and 4.0.1). Please note that the following file is only intended as a base point. It is very likely that you will need to modify paths or monitor settings. Note that my monitor accepts a wide range of frequencies. CHANGE THESE OR YOU MAY CAUSE DAMAGE TO YOUR MONITOR. You have been warned! Here is a basic file to get started:
Section "Files"
    RgbPath  "/usr/X11R6/lib/X11/rgb"
    FontPath "/usr/X11R6/lib/X11/fonts/local/"
    FontPath "/usr/X11R6/lib/X11/fonts/misc/"
    FontPath "/usr/X11R6/lib/X11/fonts/75dpi/:unscaled"
    FontPath "/usr/X11R6/lib/X11/fonts/100dpi/:unscaled"
    FontPath "/usr/X11R6/lib/X11/fonts/Type1/"
    FontPath "/usr/X11R6/lib/X11/fonts/CID/"
    FontPath "/usr/X11R6/lib/X11/fonts/Speedo/"
    FontPath "/usr/X11R6/lib/X11/fonts/75dpi/"
    FontPath "/usr/X11R6/lib/X11/fonts/100dpi/"
    FontPath "/usr/X11R6/lib/X11/fonts/TrueType/"
EndSection

Section "Module"
    Load "dbe"
    Load "extmod"
    Load "type1"
    Load "freetype"
    Load "glx"
    Load "dri"
EndSection

Section "ServerFlags"
    Option "blank time"   "10"   # 10 minutes
    Option "standby time" "20"
    Option "suspend time" "30"
    Option "off time"     "60"
EndSection

Section "InputDevice"
    Identifier "Keyboard"
    Driver     "keyboard"
EndSection

Section "InputDevice"
    Identifier "Mouse"
    Driver     "mouse"
    Option "Device"   "/dev/mouse"
    Option "Protocol" "PS/2"
EndSection

Section "Monitor"
    Identifier  "Monitor"
    HorizSync   30-95
    VertRefresh 50-150
    Option "dpms"
EndSection

Section "Device"
    Identifier "Video Card"
    Driver     "mga"
    BusID      "PCI:1:0:0"
EndSection

Section "Screen"
    Identifier   "Screen 1"
    Device       "Video Card"
    Monitor      "Monitor"
    DefaultDepth 16
    SubSection "Display"
        Depth 8
        Modes "1280x1024" "1024x768" "800x600" "640x480"
    EndSubSection
    SubSection "Display"
        Depth 16
        Modes "1280x1024" "1024x768" "800x600" "640x480"
    EndSubSection
    SubSection "Display"
        Depth 24
        Modes "1280x1024" "1024x768" "800x600" "640x480"
    EndSubSection
EndSection

Section "ServerLayout"
    Identifier "Simple"
    Screen "Screen 1"
    InputDevice "Mouse" "CorePointer"
    InputDevice "Keyboard" "CoreKeyboard"
EndSection
XFree86 Setup: Card Specific Setup
First, if you're running Red Hat 7.0, the included libGL.so files are actually from Mesa 3.3. This probably shouldn't be an issue since the files that usually come with XFree86 4.0.1 are also based on Mesa, but we experienced problems with some cards and had to manually replace the files with those from XFree86's FTP site.
Actually verifying that you have 3D acceleration enabled properly can be tricky. Keep an eye on your /var/log/XFree86.0.log file and make sure you see a line that says "direct rendering enabled." The DRI page has a lot of useful information and tools for verifying 3D acceleration.
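As a quick sketch, the check can be scripted; the log path is the Red Hat 7.0 default mentioned above, so adjust it for other distributions:

```shell
# Look for the DRI status line in the X server log
# (path from the article; other setups may log elsewhere).
logfile=/var/log/XFree86.0.log
if grep -qi "direct rendering enabled" "$logfile" 2>/dev/null; then
    dri_status="enabled"
else
    dri_status="not confirmed"
fi
echo "direct rendering: $dri_status"
```

If glxinfo is installed, `glxinfo | grep -i "direct rendering"` gives the client-side view of the same information.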
For most cards, the only thing that needs to be modified is the Driver parameter of the Device section. Here's the rundown:
NVIDIA cards: use nv to use the driver that comes with XFree86 4.0.1 (support for cards through the first GeForce cards only), or nvidia to use the drivers from NVIDIA's web site (support for GeForce2 cards as well). The driver difference has led to quite a bit of confusion. Much of this is due to the following excerpt from the NVIDIA driver FAQ:
Note: currently released versions of XFree86 4.0.x DO NOT support the newer GeForce2 family of NVIDIA cards. This includes cards such as the GeForce2 GTS, the GeForce2 MX, and the GeForce2 Ultra. This has been fixed in XFree86's CVS repository, but will take a new release before binary distributions pick the support up. If you have one of these cards, you will need to skip attempting to run the "nv" driver at this point.
This does not mean that GeForce2 cards do not work with XFree86 4.0.x. It simply means that the drivers shipped with all current releases of XFree86 won't work with GeForce2 cards. The NVIDIA drivers are easy enough to set up, so don't give up yet. Download the drivers from NVIDIA and install. The tricky part is that they ship different versions of libGL.so and libglx.a (the GLX module). Make sure you remove the previous files so as to avoid accidentally running a program against the wrong library (which would result in no hardware acceleration). XFree86's libGL.so resides in /usr/X11R6/lib, while its libglx.a is in /usr/X11R6/lib/modules/extensions. Note that Red Hat 7.0 puts its copies in /usr/lib with the Mesa-3.3 package, as mentioned above.
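Before deleting anything, it helps to see which copies are actually present. This sketch only lists candidates in the paths just mentioned; it never removes a file:

```shell
# List libGL/libglx files in the locations discussed above so you can
# verify which copies exist before removing the stale ones by hand.
found=0
for dir in /usr/lib /usr/X11R6/lib /usr/X11R6/lib/modules/extensions; do
    for f in "$dir"/libGL.so* "$dir"/libglx.a; do
        if [ -e "$f" ]; then
            echo "$f"
            found=$((found + 1))
        fi
    done
done
echo "candidates found: $found"
```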
If you're running *BSD or another non-Linux operating system supported by XFree86, you are unable to use NVIDIA's drivers. Try the latest code in XFree86's CVS repository. This code is unstable development code, but we've found that it's generally far from troublesome. It should work with even GeForce2 based cards, but will not provide the performance you see in this article. NVIDIA does not release enough information about their card to even utilize DMA transfers.
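Whichever driver you choose, the change is confined to the Device section of XF86Config. A minimal sketch (the Identifier string is arbitrary):

```
Section "Device"
    Identifier "Video Card"
    Driver     "nvidia"   # NVIDIA's binary driver; use "nv" for the XFree86-supplied one
EndSection
```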
The XF86Config file listed earlier is actually for a Matrox card, so there is no modification necessary. However, we recommend that you upgrade your driver to the one distributed on Matrox's web site. This driver provides Dual Head functionality as well as improved OpenGL acceleration. If the "beta" status of Matrox's driver strikes you as too stable, DRI's driver (in CVS) adds support for 32bpp rendering.
For Rage 128 based cards, simply change the Driver line to use "r128". Note that there are hundreds of different ATI cards based on the Rage 128 and they can sometimes identify themselves differently. We've seen reports of people unable to get the drivers to work simply because the card isn't listed as supported. If this happens, you can always try the CVS code from DRI or XFree86.
Radeon drivers are in development but have not been released yet. The development of these drivers is said to also result in higher quality drivers for Rage 128 based cards. Rage Pro cards are not supported (from a 3D acceleration standpoint) at this time, although development is in progress. Support will likely be based on work done on the Utah-GLX project for XFree86 3.3.x. In the meantime, Rage Pro users may use XFree86 3.3.6 along with Utah-GLX and achieve good (for the card) performance.
According to the DRI User Guide, Rage 128 Pro cards are not currently supported. As the driver is being reworked for Radeon and Rage128 cards, we would assume that support will be added when this new driver is available, although we have not checked this with the DRI developers.
For all Voodoo cards, use driver "tdfx" in the Driver line. Unfortunately, Red Hat 7.0 fails to include this driver and the corresponding libraries it requires. If you already have XFree86 4.0.1 installed and have the correct libglide3x.so.3 for your card, you can skip the following two paragraphs.
Driver support can actually be very tricky for these cards, as they require both drivers and specific versions of the Glide library. If you have a Voodoo3 or Voodoo Banshee based card, see the Voodoo3/Banshee driver page on 3dfx's site; if you have a Voodoo5 based card, see the Voodoo5 driver page there.
Support for the Voodoo5 is fairly recent, and only one of the card's two on-board graphics processors is currently used. Keep that in mind when reviewing these benchmarks, as scores should improve dramatically once the driver picks up the second processor.
Also note that by default, the 3dfx driver caps frame rate to the monitor's refresh. For the purposes of benchmarking, we disabled this by defining the FX_GLIDE_SWAPINTERVAL environment variable to 0.
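In a Bourne-style shell, that looks like the following (the variable name comes from 3dfx's Glide library, as used in the article):

```shell
# Uncap the Glide frame rate for this shell and any game launched from it
FX_GLIDE_SWAPINTERVAL=0
export FX_GLIDE_SWAPINTERVAL
echo "FX_GLIDE_SWAPINTERVAL=$FX_GLIDE_SWAPINTERVAL"
```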
The on-board video on the Intel 815 chipset is supported by the i810 driver. Change the Driver line of the Device section of your XF86Config file to reference this driver. You must have the agpgart.o kernel module for this to work, so check /lib/modules/ for that file. Our test machine running Red Hat 7.0 came ready to go.
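A quick way to check for the module, assuming the usual /lib/modules/<kernel-version> layout of 2.2/2.4-era kernels:

```shell
# Search the running kernel's module tree for agpgart
moddir="/lib/modules/$(uname -r)"
agp=$(find "$moddir" -name 'agpgart*' 2>/dev/null | head -n 1)
result="${agp:-not found}"
echo "agpgart module: $result"
```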
Red Hat Linux 7.0 Test System
| CPU(s) | Intel Pentium III 750 (100MHz FSB on AX6BC Pro) / Intel Pentium III 733 (133MHz FSB on CUSL2) |
| Memory | 128MB PC133 Corsair SDRAM (Micron -7E Chips) |
| Hard Drive | Western Digital Expert WD313600 13.6GB 7200RPM Ultra ATA/66 |
| Video Card | GeForce2 GTS 32MB |
| Ethernet | Linksys LNE100TX 100Mbit PCI Ethernet Adapter |
| Operating System | Red Hat Linux 7.0 |
2D Performance: Xmark Results
We learned a thing or two about 2D benchmarks in our last article. First, we learned that Xmark is particular about using data from v1.3 of x11perf. As we were using v1.5, the 2D benchmarks we posted were incorrect. Since the corrected numbers did not change the outcome, we decided simply to publish the new results in this article rather than update the previous one. This information was relayed by an XFree86 hacker, so we feel pretty confident we've got it right this time.
Xmark is a comprehensive benchmark for X-Windows systems. The process is to run x11perf, which performs a multitude of separate tests over the course of several hours; the Xmark script then parses the data into a single score, an Xmark. This is the current standard 2D performance measurement for X-Windows.
One thing of note: both 3dfx cards caused X to hang while running this benchmark. It is a known bug in XFree86 4.0.1 and will be resolved in XFree86 4.0.2. If you're curious, here is the explanation we received:
Just FYI, what's happening is this. We create a command buffer on the card. It's just a chunk of memory where we write small chunks of data that tell the card what to do. When that buffer fills up we have to wait for the card to execute the stuff in the buffer before we can put more stuff in it.
The problem is that x11perf isn't a usual app. It executes the exact same command over and over. It fills up the buffer with them. We then have to wait for it to drain. Normally that isn't the problem because it is a mix of commands and they take reasonable amounts of time, but when x11perf wants to execute a slow command, it takes too long for the buffer to empty and the driver thinks the board is hung.
The solution is to run the test for a shorter time period (cutting the several hour test in half, roughly). The extra parameters we used were -repeat 2 -time 2.
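For reference, the shortened invocation looks like this; x11perf needs a running X server, so the command below falls back to a message when no display is available:

```shell
# -repeat 2 -time 2 roughly halves the run; -all covers the full test suite.
# The resulting log is then fed to the Xmark script for a single score.
x11perf -repeat 2 -time 2 -all > x11perf.log 2>&1 || echo "x11perf needs an X display"
```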
As in the last article, the NVIDIA cards dominated. After publishing that article, we received an e-mail from the original developer of the Matrox driver. He mentioned that at the time he wrote the driver, 2D support did not utilize any features that would require kernel support, such as AGP and DMA transfers. We have yet to receive confirmation from Matrox, but it appears as if little or no work has been done in this area since the XFree86 release. We hope that Matrox can improve their 2D performance in forthcoming driver releases. Why no AGP/DMA usage in 2D? As these features require kernel support, they must be written for each platform individually. Our guess is that the module was designed to be as fast as possible while maintaining complete portability. Adding support for AGP and DMA is an operating system specific task.
3D Performance: Quake III Arena
The de facto standard for consumer 3D benchmarking is Quake III Arena, for the simple reason that it is the most widely used 3D application among average users. In fact, drivers are often criticized, particularly by non-gamers, for being optimized only for Quake. Yes, 3D is useful for things other than gaming. Regardless, Quake III makes for an excellent 3D benchmark due to the fact that it still drives sales of modern 3D cards.
Unmatched, the NVIDIA cards simply dominate Quake III performance. Note how the GeForce2 scores flatten at lower resolutions; until memory bandwidth becomes the bottleneck, they're simply not challenged. The Voodoo5 5500 is held back by the fact that its drivers only support one of its two processors at this time. With the upcoming dual-processor Voodoo5 driver release and the soon to be released Radeon drivers, maybe we'll see a card get somewhere close to the GeForce2s in performance. The G400 and Rage 128 round out third place with very playable performance. Intel's 815 chipset on-board video shows its age (it's based on an updated version of the old Intel i740) and comes in dead last.
We noticed no rendering errors or other graphic glitches with any of these cards. Once again, drivers are often engineered for Quake III, and this is likely a result of that fact.
A note on Viewperf
We had also intended to include benchmarks from the cross platform OpenGL benchmark, SPECviewperf. Unfortunately, it seems as though the only drivers mature enough to support this test are the NVIDIA drivers. All other cards failed somewhere in the middle of any of the tests, and thus the results are unreliable. Viewperf is known for being one of the most demanding OpenGL benchmark applications out there. It is intended to give results corresponding to professional research applications where high-end cards dominate.
3D Performance: Evas_test
Once again, we've included benchmarks from Evas, the future rendering backend for Enlightenment and EFM. This version is actually newer than the one used in the previous article, hence the differences in scores. The benchmark has since been renamed evas_test_old, as it is being replaced by a new test utility. The newer tool allows easy comparison of different rendering models (OpenGL, straight Xlib, and software-rendered Imlib2), while the older tool used here serves better as a benchmark of the texturing throughput of a card and its drivers. The proliferation of this library will allow even non-gamers to utilize the 3D acceleration of their cards in everyday applications.
Here's a screen shot of evas_test_old running (in software rendered mode, hence the low frame rate). Lots of floating icons, rotating gradients and alpha blending all over the place.
Once again, the GeForce2s come out on top. The more economical GeForce2 MX shows its crippled memory bandwidth as the resolutions get higher and fill rate becomes an issue, at least in comparison to its big brother, the GeForce2 GTS. The G400 MAX followed in a distant third, barely outperforming the Voodoo5. Once again, the Intel 815 comes in dead last, but at least it outperforms Imlib2's software rendering.
Unlike the Quake III results, visual quality was an issue here. Unfortunately, it is not possible for us to determine whether the artifacts are the result of a specific rendering option used by Evas or of driver or hardware quality. Thus, treat these as possible driver issues, not confirmed ones.
The worst of the bunch, the Intel 815, incorrectly rendered both a solid-filled, semi-transparent rectangle and all floating text, which was drawn as if the alpha (transparency) values were being ignored. The ATI Rage 128 fared better, properly rendering all floating text, but it also misrendered the rectangle. In both cases, and with both 3dfx cards, there was significant color banding in the background. Granted, this was 16bpp, but the Matrox and both NVIDIA cards looked fine.
Here, we can see the difference between the Voodoo3's rendering (to the left) and what it should look like (to the right). Notice the distinct separation of color in the background. The G400 and both GeForce2 cards did not demonstrate this artifact.
The Future: Next Generation Hardware
In the past, Linux users had to watch new hardware come to market with the realization that it would not be supported immediately, if ever. Oh, the agony of watching a Windows user enjoy the benefits of his new 3D card, knowing you'd be stuck with your older one! Thankfully, with the amazing performance of VA Linux's and Red Hat's IPOs, along with the media frenzy of the last year, we now get drivers for next generation hardware much faster. Here's a quick look at what's to come:
Open-source advocates caution that NVIDIA's use of closed-source drivers could leave the Linux community high and dry if NVIDIA ever grows tired of supporting us. For the moment anyway, we have the benefit of very good drivers. Further, their efforts to unify the code between their Windows and Linux drivers assures porting will be easy and drivers should be available rapidly for future products. We have also heard rumors that the current drivers may actually support several unreleased/unannounced chips. This would certainly be good news, but we are unable to confirm this at this time.
The closed-source vs. open-source driver development debate is fierce in the Linux community, which shuns any code distributed in binary-only form. To Windows users this seems silly, but there are several reasons that open source driver development is a very good thing. It all boils down to the ability to modify or enhance drivers when the supporting company is not providing important features. For example, NVIDIA must add support for new X extensions and other enhancements themselves. While XFree86's drivers (in CVS) are starting to support the new Render extension (alpha-blending between windows and anti-aliased fonts), NVIDIA will have to implement it on their own. As it's one of many cool new X extensions, we hope that they are following its development and plan to include it in a future release of their drivers.

Open sourced drivers allow other developers to add support for such features themselves; in this case, it would likely be implemented by an XFree86 developer. Further, by keeping the drivers closed source, NVIDIA restricts the community's ability to port them to other operating systems such as FreeBSD. Typically, companies are blacklisted by Linux users for restricting open source development. However, NVIDIA's drivers are currently very good and their hardware is excellent, so many users swallow their pride and live with it.
We asked DRI's development team about possible driver development for the G800 card. Here's their response:
We can't talk about unreleased products at all. When we do work on a card, we have an agreement with the vendor that defines when we can talk about the work. We always try to announce the work as early as possible, (to avoid redundant work) but it is always up to the vendor to decide exactly when to announce it.
So, basically, we didn't find out anything. Given Matrox's track record, drivers are more or less guaranteed, but it would have been nice to know that they're already in development.
As the Voodoo5 5500 worked, we would assume that the 6000 should have no serious issues preventing it from working out of the box, though full support will likely come later, since the Voodoo5's dual-processor setup is currently restricted to a single processor. Historically, 3dfx has been a good supporter of open-source development and has released Linux drivers; we see no reason to expect otherwise here.
The recently released Radeon is not currently supported by XFree86 4.0.x. Like other manufacturers, ATI has contracted Precision Insight (developers of DRI for XFree86) to develop Radeon drivers for XFree86 4.0.x. We have not received any word on their progress or estimated time frame of release, although we would expect that it is not far off.
Note that the new Radeon drivers involve a major update to the current Rage 128 driver architecture as well. Thus, not only will Radeons be supported in the near future, but the Rage 128 cards will receive a nice performance boost by the new driver.
Again, we see the NVIDIA cards dominating both 2D and 3D performance under Linux. NVIDIA has worked hard on their drivers and they are proud of the results, as they should be. Their performance edge makes it difficult to justify another card on the basis of open-source drivers alone. Still, many users feel burned by NVIDIA's schizophrenic history with TNT drivers under Linux, which keeps them away from NVIDIA's latest offerings. For those interested: after announcing open source drivers, NVIDIA obfuscated the code before releasing it. Then the drivers sat, unmodified, unimproved and unsupported. Finally, they released their revolutionary new driver architecture (used here with the GeForce2 cards), which supports the TNT/TNT2 as well as the GeForce and GeForce2 cards.
Still, there are other features on the other cards that may interest other users. Matrox supports both TV and flat panel out on their G400/G450 cards as well as dual head output in their drivers. It should be noted that even this support (Dualhead and TV/DVI output) is closed-source, as opening it would violate an NDA between Matrox and Macrovision.
Further, the ATI All-In-Wonder cards with TV input are also supported under Linux, though unofficially, by the GATOS project.
Finally, Linux Games has just released a very in-depth review of several video cards and their corresponding gaming performance under Linux and Windows. They cover gaming performance for specific games and discuss differences in driver support between Windows and Linux.
· Matrox - Manufacturers of the Millennium G400Max and G450
· NVIDIA - Manufacturers of the GeForce2 MX
· Red Hat - Linux distribution used in these tests
· XFree86 - The X11R6.4 implementation used for these tests
· DRI - The direct rendering implementation used by XFree86
· Linux3D - Web site with general Linux 3D information
· GLX - The X extension that allows for OpenGL acceleration even remotely
· Precision Insight - Initial designers of DRI, site includes a lot of DRI information
· GATOS - The GATOS project, support for TV input on ATI cards
· Render - The X extension in development allowing for anti-aliased fonts and alpha blending between windows
· SPEC - Creators of SPECviewperf, which was supposed to be used for 3D benchmarking in this article
· Enlightenment - Window manager and home of Evas, used for 3D benchmarking in this article