Toying with Theory (continued)

Next we have VillageMark, a benchmark originally developed by the folks at PowerVR to show off the benefits of a deferred rendering system. In order to do this the benchmark features an extraordinary amount of overdraw (rendering pixels that are later covered by other surfaces and thus never displayed on screen), which doesn't hurt cards like the Kyro II since hidden pixels are never rendered at all.
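To make the overdraw idea concrete, here is a toy back-of-the-envelope model (not VillageMark's actual workload; the layer count is an illustrative assumption) comparing how many fragments an immediate-mode renderer shades versus a deferred renderer that resolves visibility first:

```python
# Toy model of overdraw: several opaque layers cover the same pixels.
# An immediate-mode renderer shades every fragment it rasterizes;
# a deferred renderer (like the Kyro II) shades only the surface
# that ends up visible at each pixel. All numbers are illustrative.

WIDTH, HEIGHT = 1024, 768
LAYERS = 5  # assumed depth complexity: 5 opaque surfaces per pixel

pixels = WIDTH * HEIGHT

# Immediate-mode: every layer's fragments get shaded, visible or not.
shaded_immediate = pixels * LAYERS

# Deferred: hidden-surface removal happens before shading,
# so only one fragment per pixel is ever shaded.
shaded_deferred = pixels

overdraw_factor = shaded_immediate / shaded_deferred
print(f"Fragments shaded (immediate): {shaded_immediate}")
print(f"Fragments shaded (deferred):  {shaded_deferred}")
print(f"Overdraw factor: {overdraw_factor:.1f}x")
```

At a depth complexity of five, the immediate-mode renderer does five times the shading work for the same final image, which is exactly the waste occlusion culling hardware tries to recover.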

VillageMark has the potential to be a great measure of the efficiency of features such as ATI's HyperZ II and NVIDIA's Visibility Subsystem. To make sure that it's a good benchmark of occlusion culling (the discarding of hidden pixels), we tried disabling HyperZ II on the Radeon 8500 to see what sort of a performance hit the card took under VillageMark:

Radeon 8500 - Z Occlusion Culling Performance
VillageMark - 1024x768x32

HyperZ II Enabled: 113 fps
HyperZ II Disabled: 59 fps
It's clear that VillageMark would serve just fine as a HyperZ-Mark: the 91% increase in performance from enabling HyperZ II is noticeably greater than the real-world gains we saw from enabling HyperZ on the original Radeon. Although the "performance drivers" had little effect on our real-world tests, we should mention that these scores (taken with the "performance drivers") were improved 35% by the new driver. Normally we would conclude that the new drivers simply work better with the Radeon 8500's HyperZ II, but if that were the case we would have seen a similar boost in all games, not just in these theoretical Direct3D tests.
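The mechanism that HyperZ II-style Z occlusion culling relies on can be sketched in a few lines: fragments are depth-tested before the expensive shading step, so occluded fragments never cost any shading work. This is a simplified software model of the general early-Z technique, not ATI's actual hardware (which also adds hierarchical rejection and Z-buffer compression):

```python
# Minimal software model of early Z rejection: test depth *before*
# shading, so occluded fragments are discarded cheaply. Purely
# illustrative; real HyperZ II also rejects whole blocks of pixels
# hierarchically and compresses Z-buffer traffic.

import math

def render(fragments, early_z=True):
    """fragments: list of (x, y, z) tuples, smaller z = closer."""
    zbuffer = {}
    shaded = 0
    for x, y, z in fragments:
        if early_z and zbuffer.get((x, y), math.inf) <= z:
            continue  # occluded: rejected before any shading happens
        # ... expensive per-pixel shading would happen here ...
        shaded += 1
        if zbuffer.get((x, y), math.inf) > z:
            zbuffer[(x, y)] = z
    return shaded

# Three opaque surfaces drawn front-to-back over the same pixel:
frags = [(0, 0, 0.2), (0, 0, 0.5), (0, 0, 0.9)]
print(render(frags, early_z=True))   # only the front fragment is shaded
print(render(frags, early_z=False))  # all three fragments are shaded
```

Note that in this simple model, fragments arriving back-to-front are never rejected; how much work early Z saves depends heavily on submission order and scene depth complexity.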

Now that we know how useful VillageMark is as a theoretical benchmark, let's take a look at how well the GeForce3 fares without its precious Visibility Subsystem:

GeForce3 - Z Occlusion Culling Performance
VillageMark - 1024x768x32

Visibility Subsystem Enabled: 63 fps
Visibility Subsystem Disabled: 38 fps

For the GeForce3, you've got to remember that NVIDIA's Visibility Subsystem is much more than just Z-buffer compression and Z occlusion culling; it also includes NVIDIA's Crossbar memory architecture. The way we "disabled" the Visibility Subsystem was by running a GeForce2 at 200/460 (core/mem), which gave us the same theoretical fill rates as the GeForce3 but without the Visibility Subsystem's enhancements. Because memory bandwidth enhancements beyond those ATI implements are at work here, we decided to see how performance changed in 16-bit color, where memory bandwidth isn't as much of a constraint. It is worth noting that there is no performance difference between the Radeon 8500's 16-bit and 32-bit scores.
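A rough back-of-the-envelope calculation shows why dropping to 16-bit color eases the bandwidth pressure. This sketch counts only color-buffer writes and assumes an illustrative overdraw factor and frame rate; it ignores Z-buffer traffic, texture reads, and any compression, so real numbers will differ:

```python
# Back-of-the-envelope framebuffer write traffic at 1024x768.
# The overdraw factor and frame rate are assumptions for illustration;
# Z-buffer and texture traffic are ignored entirely.

width, height, overdraw, fps = 1024, 768, 5, 60

results = {}
for bpp, label in ((4, "32-bit"), (2, "16-bit")):
    bytes_per_frame = width * height * overdraw * bpp
    mb_per_sec = bytes_per_frame * fps / (1024 ** 2)
    results[label] = mb_per_sec
    print(f"{label}: {mb_per_sec:.0f} MB/s of color writes")
```

Halving the bytes per pixel halves the color-write traffic, which is why a bandwidth-limited card closes the gap in 16-bit color while a fill-rate-limited one doesn't.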

GeForce3 - Z Occlusion Culling Performance
VillageMark - 1024x768x16

Visibility Subsystem Enabled: 52 fps
Visibility Subsystem Disabled: 42 fps

Again we see a healthy boost from the Visibility Subsystem, but you'll notice that the boost isn't nearly as great as it was in 32-bit color, indicating that a significant part of the 32-bit improvement is due to NVIDIA's Crossbar memory controller and not the other features of the Visibility Subsystem. In neither case is the GeForce3 able to realize the same performance gains that the Radeon 8500 sees when HyperZ II is enabled.
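Running the same percentage arithmetic over all three charts makes the comparison concrete (fps values read straight from the charts above):

```python
# Percentage gain from enabling each occlusion culling subsystem,
# using the fps scores from the three VillageMark charts above.

def gain(enabled, disabled):
    return (enabled - disabled) / disabled * 100

print(f"GeForce3, 32-bit: {gain(63, 38):.1f}%")   # ~65.8%
print(f"GeForce3, 16-bit: {gain(52, 42):.1f}%")   # ~23.8%
print(f"Radeon 8500, 32-bit: {gain(113, 59):.1f}%")  # ~91.5%
```

The GeForce3's gain shrinks from roughly 66% to roughly 24% when memory bandwidth stops being the bottleneck, while the Radeon 8500's HyperZ II gain of roughly 91% holds regardless of color depth.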

There was little doubt that the Radeon could hold its own against the GeForce3 and even the new Titanium line on paper, but where it really matters is in the real world. It's time to step out of Pleasantville...
