NVIDIA GeForce 64 MB

by Matthew Witheiler on April 6, 2000 7:27 AM EST

Result of GeForce Limitations



As suspected, raising either the core clock speed or the memory clock speed results in an increase in frame rate. What the graphs above show well, however, is how overclocking affects the card differently depending on whether you overclock the memory clock or the core clock. The core clock graphs show the frame rate rising steadily at most resolutions in 16-bit color. In 32-bit color, the same core clock increases have far less impact, as the nearly flat, horizontal trend of the data makes clear.

The graphs of memory clock increases show the opposite. In 16-bit color, raising the memory clock yields only small gains. The story changes, however, in 32-bit color mode: the data now shows that increasing the memory clock speed produces a steadily rising frame rate at most resolutions, the same behavior we saw when increasing the core clock in 16-bit color.

These results are easily explained using the information about the GeForce given at the beginning of this section. In 16-bit color, as the core clock is increased, the GeForce GPU comes close to its theoretical fill rate. This is because, at 16-bit color, less data has to be passed to the memory over the memory bus. Since the memory bus is not creating a bottleneck, data flows freely from the GPU to the RAM and out to the display; the 300 MHz effective memory clock provides enough bandwidth.
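To put some rough numbers behind this, here is a quick back-of-the-envelope sketch of our own (the 128-bit bus, the 300 MHz effective memory clock, and the 480 million pixel per second fill rate are the card's published specifications; the roughly 6 bytes of frame buffer traffic per 16-bit pixel is an assumption, and texture fetches are ignored entirely):

```python
# Back-of-the-envelope sketch (our illustration, not figures from NVIDIA):
# how much of the DDR GeForce's memory bandwidth does the full fill rate
# consume at 16-bit color? The ~6 bytes per pixel (16-bit color write plus
# 16-bit Z read and write) is an assumption; texture fetches are ignored.

BUS_WIDTH_BYTES = 128 // 8          # 128-bit memory bus
EFFECTIVE_MEM_CLOCK = 300e6         # 150 MHz DDR, 300 MHz effective
FILL_RATE = 480e6                   # 4 pipelines x 120 MHz core, pixels/s
BYTES_PER_PIXEL_16BIT = 6           # assumed frame buffer traffic per pixel

available = BUS_WIDTH_BYTES * EFFECTIVE_MEM_CLOCK      # bytes/s
demanded = FILL_RATE * BYTES_PER_PIXEL_16BIT           # bytes/s

print(f"Available bandwidth : {available / 1e9:.2f} GB/s")   # ~4.80 GB/s
print(f"16-bit color demand : {demanded / 1e9:.2f} GB/s")    # ~2.88 GB/s
```

Even with Z-buffer traffic included in the estimate, 16-bit rendering asks for well under the 4.8 GB/s the DDR memory can deliver, which is why the core, not the bus, sets the pace.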

When the core clock speed is increased while running in 32-bit color, the outcome is different. Overclocking the core still gains some performance, but the gain is not only smaller than in 16-bit color, it also hits a limiting value (the horizontal asymptote on the graph). Our knowledge of the memory clock and bandwidth explains this. Although the GPU is pushing out its theoretical fill rate, the memory bus cannot keep up with the amount of data it is receiving. The memory becomes a bottleneck simply because 32-bit color requires twice as much frame buffer data to be moved as 16-bit color. At 300 MHz effective, the data can only move into and out of the onboard RAM so fast, and this bottleneck explains the nearly horizontal slope of the graph.
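A minimal bottleneck model makes the asymptote easier to see. This is strictly our own illustration, not NVIDIA's math: the delivered fill rate is taken as the smaller of what the core can draw and what the memory bus can absorb, and the roughly 12 bytes of frame buffer traffic per 32-bit pixel is an assumed figure. The real card retains a bit of headroom at lower resolutions, which is where the small gain comes from, but the crude model captures the ceiling:

```python
# Minimal bottleneck model (our own illustration): delivered fill rate is
# the smaller of what the core can draw and what the memory bus can absorb.
# The ~12 bytes per 32-bit pixel (32-bit color write plus 32-bit Z read and
# write) is an assumed figure; texture fetches would only add to it.

PIPELINES = 4
MEM_BANDWIDTH = 4.8e9               # 128-bit bus at 300 MHz effective, bytes/s
BYTES_PER_PIXEL_32BIT = 12          # assumed frame buffer traffic per pixel

def delivered_fill_rate(core_mhz, bandwidth, bytes_per_pixel):
    core_limit = PIPELINES * core_mhz * 1e6        # pixels/s the core can draw
    memory_limit = bandwidth / bytes_per_pixel     # pixels/s the bus can feed
    return min(core_limit, memory_limit)

for core_mhz in (120, 130, 140, 150, 160):
    mpix = delivered_fill_rate(core_mhz, MEM_BANDWIDTH, BYTES_PER_PIXEL_32BIT) / 1e6
    print(f"core {core_mhz} MHz -> ~{mpix:.0f} Mpixels/s delivered")

# Every line prints ~400 Mpixels/s: in this crude model the memory bus caps
# the delivered fill no matter how far the core is pushed -- the horizontal
# asymptote seen in the 32-bit graphs.
```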

When the core clock is kept constant and the memory clock is raised, the results are reversed. At 16-bit color, overclocking the memory makes no significant difference in frame rate. This is because, as described above, the GeForce was already reaching its theoretical fill rate; memory bandwidth was not an issue. Increasing bandwidth by raising the memory clock brings no appreciable speed increase at 16-bit color because the GPU, at its stock core clock, cannot saturate the memory bandwidth already available.
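Running the same rough model with the core held at its stock 120 MHz and only the memory clock moving shows why nothing changes at 16-bit color (again, the 6 bytes of frame buffer traffic per pixel is an assumption made purely for illustration):

```python
# Same illustrative model, now holding the core at its stock 120 MHz and
# sweeping the memory clock at 16-bit color. The ~6 bytes per 16-bit pixel
# remains an assumption for illustration only.

PIPELINES = 4
CORE_LIMIT = PIPELINES * 120e6      # 480 Mpixels/s at the stock core clock
BYTES_PER_PIXEL_16BIT = 6
BUS_WIDTH_BYTES = 128 // 8

for effective_mem_mhz in (300, 320, 340, 360):
    bandwidth = BUS_WIDTH_BYTES * effective_mem_mhz * 1e6    # bytes/s
    memory_limit = bandwidth / BYTES_PER_PIXEL_16BIT         # pixels/s
    delivered = min(CORE_LIMIT, memory_limit) / 1e6
    print(f"mem {effective_mem_mhz} MHz -> ~{delivered:.0f} Mpixels/s delivered")

# Every line prints 480 Mpixels/s: at 16-bit color the bus already outruns
# the core, so extra memory bandwidth buys nothing.
```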

The tables turn once again when the color depth is changed from 16-bit to 32-bit. Unlike the results from increasing the core clock in 32-bit color, the speed of the DDR GeForce increases steadily as the memory clock is pushed up. Once again, this comes down to the memory bandwidth available at the higher color depth. Although the GeForce can process enough information to sustain its 480 million pixels per second, the memory bus at stock speed cannot. Raising the memory clock raises the memory bandwidth, and the frame rate improves because the bottleneck is widened. The mass of data headed to the RAM is no longer held to the pace of the stock 300 MHz clock; information can now move from the core to the memory at a rate that does not limit the core's fill rate. This is shown graphically by the steadily increasing slope of the FPS graph when the memory clock is raised at a constant core speed.
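Feeding the same assumed figures through the sketch one final time, now at 32-bit color with the memory clock rising, reproduces exactly the steady climb the graphs record:

```python
# The flip side of the same sketch: stock 120 MHz core, 32-bit color, memory
# clock rising. With the assumed ~12 bytes of traffic per 32-bit pixel, the
# bus is the limiter, so every extra MHz of memory clock shows up directly
# in the delivered fill rate.

PIPELINES = 4
CORE_LIMIT = PIPELINES * 120e6      # 480 Mpixels/s at the stock core clock
BYTES_PER_PIXEL_32BIT = 12
BUS_WIDTH_BYTES = 128 // 8

for effective_mem_mhz in (300, 320, 340, 360):
    bandwidth = BUS_WIDTH_BYTES * effective_mem_mhz * 1e6    # bytes/s
    memory_limit = bandwidth / BYTES_PER_PIXEL_32BIT         # pixels/s
    delivered = min(CORE_LIMIT, memory_limit) / 1e6
    print(f"mem {effective_mem_mhz} MHz -> ~{delivered:.0f} Mpixels/s delivered")

# ~400, 427, 453, 480 Mpixels/s: the delivered fill climbs with the memory
# clock until it finally meets the core's own 480 Mpixels/s ceiling.
```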

NVIDIA knew this would be a problem when the GeForce was released, which is why the GeForce comes in two models: SDR and DDR. The results above come from tests performed on a DDR card. Let's take a look at how SDR GeForce cards compare to DDR GeForce cards and explore how SDR cards react to overclocking.

Comments


  • Dr AB - Friday, May 8, 2020

    Wow, what an amazing article! There were a lot of things that I didn't even know; glad that I read it. It's fascinating how things have progressed in 20 years.
