Inside Graphics Media Accelerator 900

There are plenty of new and improved features in GMA900 that deserve some in-depth explanation, but before we get to that, here is the specification list as provided by Intel on the Graphics Media Accelerator 900:

Third-generation Graphics Core
  • 256-bit graphics core
  • 8/16/32 bpp
  • Up to 8.5 GB/sec memory bandwidth
  • 1.3 GP/sec and 1.3 GT/sec fill rate
  • 224MB maximum video memory
  • 2048x1536 at 85 Hz maximum resolution
  • Dynamic Display Modes for flat-panel and wide-screen support
  • Operating systems supported: Microsoft Windows XP, Windows 2000, Linux-compatible (XFree86 source available)
High-performance 3D
  • Up to 4 pixels per clock rendering
  • Microsoft DirectX 9 Hardware Acceleration Features:
  • Pixel Shader 2.0
  • Volumetric Textures
  • Shadow Maps
  • Slope Scale Depth Bias
  • Two-Sided Stencil
  • Microsoft DirectX 9 Vertex Shader 2.0 and Transform and Lighting supported in software through highly optimized Processor Specific Geometry Pipeline (PSGP)
  • DirectX Texture Decompression
  • OpenGL 1.4 support
Advanced Display Technology
  • 400 MHz DAC frequency for up to 2048x1536 resolution for both analog and digital displays
  • Two Serial Digital Video Out (SDVO) ports for flat-panel monitors and/or TV-out support via Advanced Digital Display 2 (ADD2) cards
  • Multiple display types (LVDS, DVI-I, DVI-D, HDTV, TV-out, CRT) for dual monitor capabilities
  • Hardware motion compensation support for DVD playback
  • HDTV 720p and 1080i display resolution support
  • 16x9 Aspect Ratio for wide screen displays
High Quality Media Support
  • Up and Down Scaling of Video Content
  • High Definition Content Decode
  • 5x3 Overlay Filtering
  • Hardware Motion Compensation support for DVD playback
The first item of note is that the GMA900 does not have any hardware vertex shader or transform and lighting capability. The vertex shader entry in the specification list is actually a software feature: Intel's driver intercepts vertex shader and hardware geometry calls and performs the work on the CPU.
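What that driver-side fallback amounts to, conceptually, is ordinary matrix math on the CPU. The sketch below is our own illustration of software vertex transformation, not Intel's actual PSGP code; all names and structure are hypothetical.

```python
# Illustrative sketch of a CPU-side vertex pipeline: transform each vertex by
# a model-view-projection matrix before rasterization-ready triangles are
# handed to the graphics core. Not Intel's actual PSGP implementation.

def mat_vec_mul(m, v):
    """Multiply a 4x4 matrix (row-major list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def transform_vertices(mvp, vertices):
    """Project homogeneous vertices and perform the perspective divide."""
    out = []
    for v in vertices:
        x, y, z, w = mat_vec_mul(mvp, v)
        out.append((x / w, y / w, z / w))  # normalized device coordinates
    return out

# An identity MVP leaves vertices in place (w = 1).
identity = [[1 if r == c else 0 for c in range(4)] for r in range(4)]
print(transform_vertices(identity, [(0.5, -0.5, 0.2, 1.0)]))
```

The real driver does this with heavily hand-optimized SSE code tuned per processor family, which is the point of calling it a "Processor Specific" geometry pipeline.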

There are a couple of reasons why we feel Intel omitted a hardware vertex shader and T&L. The first that came to mind was that Intel wants to sell higher-powered processors to those who want better geometry performance. Intel itself says that the performance increase wasn't worth the added cost and die size of the part. Both of these are return-on-investment issues, and Intel's choice does make sense from this perspective.

Of course, the flip side is that hardware geometry and T&L have historically been difficult to implement alongside Intel's internal architecture. What makes Intel's architecture so different from everyone else's? We're glad that you asked.

Intel has a licensing agreement with STMicroelectronics. If that name sounds familiar, it's because STMicro was the company behind the Kyro and Kyro II. Those who've been following the graphics industry for a while will already have guessed that Intel is using its own flavor of STMicro's tile-based rendering technology.

The main difference between immediate mode rendering (what most other GPUs implement) and tile-based rendering is that tile-based rendering eliminates external z-buffer traffic and the need to blend against the framebuffer in memory.

For every textured, lit, and shaded object in a 3D scene, immediate mode rendering starts processing as soon as possible: geometry is transformed and lit, pixels are assigned z-buffer values and textured, and the results are rasterized to the screen. Obviously, objects sometimes occlude other objects. In these cases, the z-buffer is used to determine which pixel is "closest" to the viewer and needs to be displayed.
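The z-buffer test at the heart of immediate mode rendering can be sketched in a few lines. This is a simplified illustration with our own names and structure, not any GPU's actual pipeline: each fragment is depth-tested and written, or discarded, the moment it arrives.

```python
# Minimal sketch of immediate-mode depth testing: each incoming fragment is
# compared against the z-buffer, and the framebuffer is written (or not)
# immediately. Rasterization and scene management are omitted.

def draw_fragment(zbuffer, framebuffer, x, y, depth, color):
    """Write the fragment only if it is closer than what's already there."""
    if depth < zbuffer[y][x]:       # smaller z = closer to the viewer
        zbuffer[y][x] = depth
        framebuffer[y][x] = color
        return True                 # framebuffer write happened
    return False                    # fragment occluded, write skipped

W = H = 4
zbuf = [[1.0] * W for _ in range(H)]    # cleared to "far"
fbuf = [[None] * W for _ in range(H)]

draw_fragment(zbuf, fbuf, 1, 1, 0.8, "red")    # far fragment drawn first
draw_fragment(zbuf, fbuf, 1, 1, 0.3, "blue")   # nearer fragment overwrites it
print(fbuf[1][1])  # "blue" survives the depth test
```

Note that the red fragment above still cost a full framebuffer and z-buffer write before being overwritten, which is exactly the wasted bandwidth that tile-based rendering avoids.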

Under tile-based rendering, all geometry and lighting data is processed first and projected triangles are generated. The entire scene is sectioned off into tiles, and these tiles are successively rendered and drawn to the frame buffer. Having already processed all the geometry, a tile-based renderer has all the information it needs to avoid drawing overlapping pixels from triangles in the tile (saving memory bandwidth). Blending effects can also be done easily before a pixel is written out to memory, saving still more memory bandwidth.
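A minimal sketch of the tile-based approach, under the simplifying assumption that fragments are already generated and triangle setup is ignored: occlusion is resolved entirely in on-chip tile buffers, so each visible pixel is written to external memory exactly once.

```python
# Sketch of the tile-based idea: buffer *all* fragments first, then resolve
# each tile with an on-chip depth buffer, writing each visible pixel to
# external memory exactly once. Fragments are (x, y, depth, color) tuples;
# binning and triangle setup are simplified away.

TILE = 2  # tile edge in pixels

def render_tiled(fragments, width, height):
    framebuffer = {}
    external_writes = 0
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            # On-chip buffers for this tile only.
            tile_z = {}
            tile_c = {}
            for (x, y, depth, color) in fragments:
                if tx <= x < tx + TILE and ty <= y < ty + TILE:
                    if depth < tile_z.get((x, y), 1.0):
                        tile_z[(x, y)] = depth
                        tile_c[(x, y)] = color
            # One external write per visible pixel, after all occlusion
            # within the tile has been resolved on-chip.
            for pos, color in tile_c.items():
                framebuffer[pos] = color
                external_writes += 1
    return framebuffer, external_writes

frags = [(1, 1, 0.8, "red"), (1, 1, 0.3, "blue"), (3, 0, 0.5, "green")]
fb, writes = render_tiled(frags, 4, 4)
print(fb[(1, 1)], writes)  # the occluded red fragment never reaches memory
```

Compare this with the immediate-mode case: the same three fragments here cost only two external writes, because the occluded one dies in the on-chip tile buffer.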

Working in tiles keeps the amount of data small enough to remain on-chip until all processing is done, which keeps the transistors busy actually doing work rather than waiting for data to load from memory. This is especially important when working with a 6-8 GB/sec memory bus that's shared with the rest of the system. Modern add-on graphics cards have over 30 GB/sec of memory bandwidth available to them in order to support all the reads and writes necessary in immediate mode rendering.
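Some back-of-the-envelope arithmetic makes the point. The resolution, overdraw factor, and frame rate below are illustrative assumptions, not measured GMA900 figures, and z-buffer and texture traffic (which add considerably more) are ignored.

```python
# Rough framebuffer-write arithmetic: with immediate-mode rendering,
# overdraw multiplies color traffic to external memory. Illustrative
# numbers only; z-buffer reads/writes and texture fetches are excluded.

def framebuffer_traffic_gb(width, height, bytes_per_pixel, overdraw, fps):
    """Approximate color-write traffic per second, in GB/sec."""
    bytes_per_frame = width * height * bytes_per_pixel * overdraw
    return bytes_per_frame * fps / 1e9

# 1024x768, 32bpp color, 3x overdraw, 60 fps:
print(round(framebuffer_traffic_gb(1024, 768, 4, 3, 60), 2), "GB/s")
```

Color writes alone are a fraction of the total; once depth reads and writes and multiple texture fetches per pixel are added in, immediate mode rendering eats into a shared 6-8 GB/sec bus very quickly.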

Of course, bandwidth is at a premium in immediate mode rendering as well, and Early and Hierarchical Z algorithms have helped NVIDIA and ATI perform similar large-scale occluded pixel elimination by looking at tiles of geometry as output from the vertex shader before all the pixels on every object are sent to the pixel engine. But the efficiency of this approach is dependent on overlapping triangles being "near" each other in the vertex stream, as all geometry is not present by the time triangles start hitting the pixel pipelines.
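The coarse rejection idea can be sketched roughly as follows. This is a simplification of how Early/Hierarchical Z actually works in NVIDIA's and ATI's hardware: we track one conservative "farthest visible" depth per screen tile and reject an incoming block of fragments outright when even its nearest point lies behind it.

```python
# Simplified Hierarchical-Z sketch: one conservative max depth per tile.
# Simplifying assumption (stated, not how real hardware works): each
# accepted block is taken to cover its whole tile, so the tile's farthest
# visible depth can only shrink.

def hier_z_test(coarse_max_z, tile_key, block_min_z, block_max_z):
    """Conservatively test a fragment block; True means it must be shaded."""
    tile_max = coarse_max_z.get(tile_key, 1.0)  # 1.0 = cleared to "far"
    if block_min_z >= tile_max:
        return False          # even the block's nearest point is occluded
    coarse_max_z[tile_key] = min(tile_max, block_max_z)
    return True

coarse = {}
print(hier_z_test(coarse, (0, 0), 0.3, 0.5))  # first block: accepted
print(hier_z_test(coarse, (0, 0), 0.6, 0.9))  # fully behind: rejected
```

The catch described above shows up here directly: if the far block had arrived first, it would have been accepted and shaded before the near one ever updated the coarse depth, which is why submission order matters for these schemes.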

Our previous reviews of the PowerVR Kyro and the Kyro II have in-depth explanations of immediate mode and tile-based rendering for those interested.

Beyond DX9 support, the GMA900 has a 400 MHz DAC, which supports higher resolutions on analog displays than previous-generation Intel graphics. This is more beneficial in 2D applications, as pushing 3D games to 1024x768 and beyond is simply an exercise in frustration.


18 Comments


  • IntelUser2000 - Tuesday, September 28, 2004 - link

    Well, I can't believe that Anandtech can do this. Look at the Intel Graphics drivers, which are 6.14.10.3756

    The latest is: 6.14.10.3889

    The difference in those two is that 3889 has support for GMA900, which will make a lot of difference in performance.
    Reply
  • LoneWolf15 - Friday, August 06, 2004 - link

    The reason I feel Intel has an obligation is far more than just because they are the #1 graphics chipset supplier in the world.

    Most systems that offer an IGP like the EG2 or GMA 9000 lack an AGP slot or a PCIe slot for a performance graphics card, so the manufacturer could save a few bucks. This absolutely dooms the entry-level user that knew little about graphics at the time of their computer purchase. It wouldn't bother me so much if users could easily replace the Intel solution with a top-flight graphics card, but in a lot of systems, this isn't the case. I feel that a current IGP should have close to the power of a GeForce4 MX or a Radeon 9200, hampered only by the shared-memory architecture, which will always hold these chips back. Either that, or there should be a clear upgrade path to a new graphics card. Seems business customers get the elevator in this case, but the home users get the shaft.

    Reply
  • T8000 - Tuesday, August 03, 2004 - link

    I think the Intel "Extreme" graphics are well worth their $3 premium, but they are meant for business desktops (and notebooks), so a review like this is just another way of saying you should buy an add-in graphics board for gaming.

    Just remember that more than half of the PCs sold will never see a 3D game, making such a graphics solution a great way to cut costs for those PCs.
    Reply
  • tfranzese - Tuesday, August 03, 2004 - link

    I am NOT inferring anything about performance. I am stating that the quality of the hardware is only as good as the software running it, which in this case are poor drivers. Intel's product is up there with the worst of them when it comes to proper drivers. In order for it to be a proper driver it should be able to render scenes properly as a customer would typically expect from such a big name.

    I haven't been criticizing performance, how fast it can render scenes. That point is easily assumed with it being an IGP. It doesn't excuse the fact though that the software is shite. And there is PLENTY in the article to support that indirectly with the screen captures and comments.

    Surprisingly the article fails to even cover Intel's driver and software presentation and integration.
    Reply
  • mikecel79 - Tuesday, August 03, 2004 - link

    #13 you are equating performance with reliability and quality. You don't say it, but it's inferred. Here's an example maybe you can understand.

    A Civic is a well made, reliable car using quality parts. It has amongst the highest quality in the industry. However, its performance is lacking in comparison to, say, a Corvette. Does that mean it is not a reliable, quality product? No, of course not. So your statement is wrong. There is NOTHING in this article to show that Intel graphics are not reliable, and are of low quality. So it's not as fast as an add-in card. It doesn't make it a low quality or unreliable product.
    Reply
  • tfranzese - Tuesday, August 03, 2004 - link

    And, where did I mention the performance? It certainly is poor (and Trogdor, the only thing stupid is your comparison which is severely lacking), but if you actually read the article there are plenty of times poor image quality and errors are brought up. And this quote to be specific:

    "But we feel that being the number 1 supplier of graphics solutions in the world, Intel has a responsibility to uphold to the population of our small corner of the galaxy. By providing poor support for current technology to such a wide number of people, Intel is doing more harm than good. Obviously, there is a place for the GMA900, and we wouldn't be so hard on Intel if they could at least offer a performance based integrated solution for those who actually want compatibly, performance, and a good price point with their new system."

    And that's putting it nicely after seeing three of these benchmarks of some pretty popular games display incorrectly. If you people call that a 'reliable' and 'quality' product from a company who holds more than the majority of market share, I'd have to suggest you all get your eyesight checked.
    Reply
  • tfranzese - Tuesday, August 03, 2004 - link

    IMO, the quality of the hardware is only as good as the software - and there is nothing quality about it, nor reliable in delivering what a customer might expect.

    Stupid comment, no. Has plenty of ground to stand on here.
    Reply
  • TrogdorJW - Monday, August 02, 2004 - link

    #2 - Yeah, this is the Intel quality and reliability, and as you can see from the article, it absolutely destroys the graphics solutions made by AMD. It's infinitely faster than those. :p (Sorry, but one stupid comment deserves another.)

    Given the cost of integrated graphics - about what, $5 extra, maybe $10? - it's not too surprising to see performance this low. I do have to agree that it's rather stupid to even include integrated graphics on the latest and greatest "high end" PCs, though. A server motherboard could add an old Rage 2c chip (like most already do), while for a desktop, I can't think of a good reason to get a 3.6E 775-LGA system that doesn't involve at least running an X300. Well, I can, and that reason is "marketing".

    Still, it's just one more way for Dell to screw over customers. I'm waiting for the Dell systems based off of the 915 and 925 chipsets to start shipping *without* the PCIe X16 ports. (Unless they're already available?)
    Reply
  • thebluesgnr - Monday, August 02, 2004 - link

    #7,

    There's no nVidia IGP for socket 754/939. I think they would rather have people buying discrete graphics (from them), so they will only release an IGP chipset when they "have to", that is, when SiS760 and VIA K8M800 gain some market share. Right now socket 754 is still expensive so integrated motherboards are not very popular.
    Reply
  • Marlin1975 - Monday, August 02, 2004 - link

    kmmatney, the only IGP for the A64 so far is the UniChrome Pro from VIA, which is not meant for 3D at all, but has a built-in MPEG decoder etc...
    There was really no reason to have an IGP for the A64 as the chips used to cost so much that buying one and trying to use an IGP is like an EE P4 on an IGP board. But now with the socket 754 Sempron and the price reduction of the 2800 and 3000, I am sure there will be an nForce IGP chipset coming soon.
    Reply
