Original Link: http://www.anandtech.com/show/858


Just about two weeks ago we provided you with a look at Microsoft's first entry into the video game console market. Today we are here to continue this series on the Hardware Behind the Consoles with a look at the industry giant Nintendo and their latest and greatest.

Today we bring you an in-depth look at the Nintendo GameCube; you'll learn about what makes it tick and more importantly, how it stacks up to the Sony Playstation 2 and Microsoft's Xbox. So without further ado, it's on to the Cube.

The GameCube CPU

While the Xbox was a PC turned game console, Nintendo's GameCube was designed as a game console from the ground up. Nintendo's needs for this next-generation console were very clear: the chip would have to be powerful, cheap to manufacture, and run cool enough to fit in a very small enclosure. Nintendo ended up contracting IBM to produce the CPUs for the GameCube, basing them on IBM's well-known PowerPC 750CXe processor.

Details on this processor are sketchy at best, but the information we've been able to gather points to a relatively unmodified PowerPC 750CXe microprocessor with the addition of close to 40 new instructions (potentially SIMD FP) designed specifically to aid game performance. Followers of the PowerPC architecture will quickly realize that these additional instructions do not comprise all of the instructions provided by Motorola's AltiVec SIMD instruction set. It is possible that only a subset of AltiVec was implemented in this processor, with instructions heavily geared towards the tasks it would be handling.

The basics of this PPC 750CXe derivative (codenamed Gekko) are fairly simple; the PowerPC core features a 4-stage basic integer pipeline, which is largely responsible for the relatively low clock speeds the core is able to achieve. More important for gaming performance, however, are more precise floating point calculations, and the Gekko's floating point pipeline is 7 stages long. Since the Gekko is a native RISC processor, it does not suffer the same fate as its Xbox counterpart: it doesn't have to spend much time in the fetch/decode stages of the pipeline. Immediately upon being fetched, the RISC instructions are dispatched, and one clock cycle later they are ready to be sent to the execution units.

The PowerPC architecture is a 64-bit architecture with a 32-bit subset, and in the case of the Gekko processor, the 32-bit subset is what is used. The CPU supports 32-bit addresses and features two 32-bit integer ALUs; separate from those is a 64-bit FPU that is capable of working on either one 64-bit float or two 32-bit floats using its thirty-two 64-bit FP registers. This abundance of operating registers is mirrored in the 32 General Purpose Registers (GPRs) that the processor has, dwarfing the Xbox's x86-limited offering (8 GPRs).

Although both the Gekko and the Intel CPU used in the Xbox are built on advanced 0.18-micron processes, the Gekko is held back by its relatively short pipeline, limiting it to generally no higher than 500MHz. The Gekko does use copper interconnects, which are superior to their aluminum counterparts (used in the Xbox CPU, for example) in that they conduct electricity more efficiently, but this advantage is still not enough to yield a higher clock speed for the CPU. In the case of the GameCube, the CPU is clocked at 485MHz, or 3 times its 162MHz FSB frequency. The benefit of a shorter pipeline is, of course, more instructions completed in that limited number of clocks. However, from all of the data we have seen comparing the PowerPC 750 to even the desktop Intel Celeron processor, it does not seem that the Gekko can compete, performance-wise.

Your experience in the PC hardware world, however, should have taught you that CPU performance does not matter much in games as long as you are bottlenecked elsewhere in the system. Theoretically, then, Gekko could be more than enough for the GameCube, but we have a feeling it's not.

Instead of being a processing powerhouse, Gekko was actually chosen for its physical characteristics. Although it does have a larger on-die L1 & L2 cache than the Xbox CPU (64KB/256KB vs. 32KB/128KB) and is composed of more transistors (over 21 million vs. approximately 9 million for the Xbox CPU), Gekko's die is under 45 mm^2. For comparison, the processor used in the Xbox has a die measuring approximately 100 mm^2.

The Gekko is actually a very cool running CPU, dissipating around 5W at its 485MHz operating frequency. Again, compared to the Intel CPU used in the Xbox, you're looking at roughly three times more heat being produced by the X-CPU than by the GameCube's Gekko.

So while it isn't as powerful as the Xbox CPU, Gekko's smaller die and cooler operation provide for lower manufacturing costs and a smaller console, which fit Nintendo's goals perfectly.

Gekko does have more FSB bandwidth at its disposal than the X-CPU, simply because its FSB is running at 162MHz vs. the 133MHz FSB frequency that is within the limits of Intel's AGTL+ spec. This results in a 1.3GB/s connection between Gekko and the North Bridge, which like in the case of the Xbox's nForce-based platform, is integrated into a single chip along with the graphics core.
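As a quick sanity check, the 1.3GB/s figure falls straight out of the bus width and clock. A few lines of Python sketch the arithmetic (assuming, as the article does, one 64-bit transfer per FSB clock):

```python
# Back-of-the-envelope check of the Gekko FSB bandwidth figure.
# A 64-bit (8-byte) bus transferring once per clock at 162MHz:
bus_width_bytes = 64 // 8          # 64-bit FSB
fsb_clock_hz = 162_000_000         # 162MHz

gekko_fsb_gbs = bus_width_bytes * fsb_clock_hz / 1e9
print(f"Gekko FSB bandwidth: {gekko_fsb_gbs:.2f} GB/s")  # ~1.30 GB/s

# The Xbox CPU's 133MHz bus of the same width, for comparison:
xcpu_fsb_gbs = bus_width_bytes * 133_000_000 / 1e9
print(f"X-CPU FSB bandwidth: {xcpu_fsb_gbs:.2f} GB/s")   # ~1.06 GB/s
```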

It's time to meet a close friend of Gekko; we call him Flipper.

A glimpse into ATI's future?

A little over a year ago ATI completed the acquisition of a company called ArtX. You may have heard us talk about ArtX in the past, as they produced an integrated graphics core for an ALi Super7 chipset called the Aladdin 7 a couple of years back.

While the chipset was too little, too late for the Super7 market, ArtX had actually caught the eye of Nintendo and was contracted to produce the graphics core for their next-generation gaming console. This little known company ended up being acquired by a much better known player in the industry, ATI.

However, at the time of the acquisition in mid-2000, the design for the GameCube was complete, as were the designs for ATI's next-generation desktop chips. So although the two companies were now under one roof, their technology could not be mixed in the production of the GameCube graphics core. One thing ATI did very wisely was to secure the agreement to place an ATI sticker on the front of the GameCube. This sort of branding is unbeatable, since they're essentially raising a group of gamers that will eventually grow to millions in support of the ATI name. When a satisfied GameCube owner goes to buy a new video card, it's more than likely that the ATI name will catch their eye first.

The GameCube motherboard is home to only two chips outside of all the memory chips on the board. The first chip you already know: IBM's Gekko processor. The second chip is the integrated North Bridge, I/O controller, and graphics processor produced by the ArtX team that is now a part of ATI. This chip not only dwarfs Gekko in size, it is also much more interesting to talk about; this chip is named Flipper.

Flipper is a 51 million transistor chip, again built on a 0.18-micron manufacturing process (this time, not using copper), and it's produced by NEC. In spite of the massive transistor count and the tremendous amount of functionality in the core, Flipper is about 106 mm^2 in size, making it just about as big as the Xbox CPU. The only letdown here is that Flipper is still built on a 0.18-micron process while ATI's new GPUs as well as the Xbox IGP are built on a 0.15-micron process, which reduces their die sizes by approximately 30% over their 0.18-micron counterparts. During the course of next year there is also the potential for the Xbox IGP to move down to a 0.13-micron process, since TSMC's 0.13-micron process will have matured considerably by the 2nd half of 2002. We'll explain why Flipper is stuck at 0.18-micron later.
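The ~30% figure is simple geometry: to a first approximation, die area scales with the square of the linear feature size. A quick sketch (ignoring pads and other structures that don't shrink):

```python
# Rough die-area scaling with process shrinks: area goes roughly as
# the square of the linear feature size (a first-order approximation
# that ignores I/O pads and other structures that don't shrink).
def shrink_factor(old_um: float, new_um: float) -> float:
    return (new_um / old_um) ** 2

flipper_area_mm2 = 106  # approximate Flipper die size at 0.18-micron

# 0.18 -> 0.15 micron: roughly a 30% area reduction
print(f"0.15um die area: {100 * shrink_factor(0.18, 0.15):.0f}% of original")

# 0.18 -> 0.13 micron: close to halving the die
print(f"0.13um Flipper: ~{flipper_area_mm2 * shrink_factor(0.18, 0.13):.0f} mm^2")
```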

The role of North Bridge is played by Flipper in that it features a 64-bit interface to the Gekko CPU running at 162MHz. The entire Flipper chip runs at 162MHz, which lends itself to much lower latency operation since all bus clocks operate in sync with one another. This also means that the graphics core in Flipper runs at 162MHz as well.

The Flipper graphics core is a fairly simple fixed function GPU aided by a very healthy amount of memory bandwidth, but first, on to the architecture of the graphics core. Flipper always operates on 4 pixels at a time using its 4 pixel pipelines; each of those pipelines is capable of applying one texture per pass, which immediately tips you off that the ArtX design wasn't influenced by ATI at all. Since the Radeon and GeForce2, both ATI and NVIDIA's cores have been able to process a minimum of two textures per pixel in each of their pipelines, which comes in quite handy since virtually none of today's games are single-textured anymore.

The fact that Flipper's T&L unit is fixed function is a bit of a disappointment as well, but it would have been impossible for ArtX to implement ATI's SmartShader programmable pixel and vertex shaders into their design and still meet Nintendo's strict deadlines. The one thing working in the GameCube's favor is that the Flipper GPU was designed solely with console gaming in mind, and the input that went into the T&L unit was much more closely tied to the developers than with some of the earlier T&L units for desktop PC graphics cards. Although it may be better suited for its target use than the earliest T&L units for PCs, there is no skirting the fact that a fixed function T&L pipeline limits exactly what game developers will be able to do. After seeing what over two years of fixed function T&L support in PC games amounted to, we'd hope for much more out of developer use of Flipper's GPU.

Anti-aliasing is very important when it comes to console games and it's thus very important that Flipper offer AA support. The core does feature support for a 7-sample multi-sample AA algorithm but it's clear that turning on 7-sample AA isn't exactly the most realistic option. ATI informed us that the number of samples is adjustable and can be set by the developer, but as was the case with Xbox, we did not see any examples of AA being implemented in any of the launch titles.

Based on the operating frequency of the core (162MHz) you can tell that the Flipper graphics core isn't a fill-rate monster, but it makes up for that by being a very efficient GPU. The efficiency comes from the use of embedded DRAM.

"If cache is so fast, then why isn't everything made out of it?"

One of the most interesting things about the GameCube design is its focus on memory bandwidth efficiency. It attains this efficiency through the use of a special type of memory known as 1T-SRAM that offers lower latency operation and higher overall bus utilization than conventional DRAM. But before you understand exactly what that is, you have to look at the differences between conventional DRAM and SRAM.

The cache on the die of the Gekko CPU, or any other CPU for that matter, is a type of RAM known as Static RAM or SRAM. The prefix static comes from the fact that unlike DRAM (Dynamic Random Access Memory), SRAM cells do not have to be constantly "refreshed" in order to retain their data (since DRAM is capacitance based, it loses its charge after a while, requiring a refresh of that charge in order to retain the data). One of the reasons DRAM is so much slower than SRAM is this constant refreshing process. It turns out that reading the contents of a DRAM cell actually refreshes it, which makes reading the contents of the cells the most common way of refreshing DRAM.

This is perfectly fine except when the contents of the cells being refreshed are simultaneously being read from or written to. SRAM avoids this by using a combination of usually 4 to 6 transistors to statically hold the data being stored in the memory. DRAM on the other hand uses only a single transistor in combination with a capacitor to hold data; the introduction of the capacitor greatly reduces the die size of DRAM cells, thus making them cheaper to manufacture, but it also introduces the problem of refreshing, as we mentioned above.
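To make the refresh behavior concrete, here is a deliberately oversimplified toy model of a leaky DRAM cell in which a read restores the stored charge. This is purely illustrative; real DRAM refresh operates on entire rows of cells, not individual bits:

```python
# Illustrative toy model of a DRAM cell: charge leaks over time,
# and a read restores (refreshes) the stored charge. Purely a
# conceptual sketch, not a model of real silicon.
class ToyDramCell:
    LEAK_PER_TICK = 0.1
    THRESHOLD = 0.5          # below this, the stored bit is lost

    def __init__(self, bit: int):
        self.bit = bit
        self.charge = 1.0    # freshly written

    def tick(self):
        """One unit of time passes; the capacitor leaks a little."""
        self.charge = max(0.0, self.charge - self.LEAK_PER_TICK)

    def read(self):
        """Reading a cell rewrites (refreshes) its charge."""
        if self.charge < self.THRESHOLD:
            self.bit = None  # charge decayed too far: data is gone
        self.charge = 1.0
        return self.bit

cell = ToyDramCell(1)
for _ in range(4):
    cell.tick()              # charge decays to 0.6 -- still readable
assert cell.read() == 1      # read succeeds and refreshes the cell

cell2 = ToyDramCell(1)
for _ in range(6):
    cell2.tick()             # charge decays to 0.4 -- data lost
assert cell2.read() is None  # refreshed too late
```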

Here you can see the problem with using conventional SRAM in mass quantities: at the same cost, you can get several times the capacity out of DRAM that you can out of SRAM. The cost of SRAM prohibits it from being used as a main memory solution, but it makes perfect sense in small amounts, such as in a cache.

A company by the name of Monolithic System Technology, Inc. (MoSys) came up with a clever design for DRAM that gives it many of the performance benefits of SRAM without incurring a huge cost penalty.

The technology that has garnered all of the attention for MoSys is what they like to call 1T-SRAM. The name implies that they have been able to produce SRAM using only a single transistor (1T) instead of the 6 transistors that are much more common. The reality of the situation is that 1T-SRAM is much more like a special form of DRAM than it is like SRAM: 1T-SRAM still requires its memory cells to be refreshed in order to retain their data, the only difference being its very efficient method of refreshing those cells. According to MoSys, their 1T-SRAM design can hide the refresh process quite effectively, to the point where they can claim latency and bandwidth figures that would rival (though not surpass) those of conventional SRAM. Obviously this is very difficult to test, since there have been very few cases where 1T-SRAM has been used in a testable platform, but it's clear that the technology does allow for lower latency accesses and higher memory bandwidth utilization. But at what cost?

MoSys claims that a 64Mbit 1T-SRAM has a die that is 10 - 15% larger than a 64Mbit SDRAM. While that may not seem like much, keep in mind that a 64Mbit RDRAM device is 15 - 30% larger than the same 64Mbit SDRAM. This would put the additional die-size cost of 1T-SRAM at anywhere between one-third of and equal to the added cost of RDRAM over SDRAM (production cost excluding license royalties). However, 1T-SRAM is still cheaper than regular SRAM, again because it is manufactured using a single transistor per cell vs. 6 for most SRAM designs.

The performance aspects of 1T-SRAM are very difficult to quantify because we've never seen it on a benchmarkable platform, making the assessment of its value equally difficult. Needless to say, we didn't present you with this explanation for no reason, as Nintendo saw fit to make heavy use of MoSys' 1T-SRAM in their GameCube design.

Embedded DRAM in Flipper

On the Flipper side of things, using NEC's 0.18-micron embedded DRAM manufacturing process, MoSys' 1T-SRAM is used on Flipper's die to provide two very large caches: a 2MB Z-buffer and a 1MB texture cache. This is cheaper than outfitting the chip with 3MB of conventional SRAM, which would rival most server CPUs in terms of die space and cost (the GameCube would not be as successful if Nintendo lost $500 per console, nor if they charged $700 per console), and it's theoretically faster than conventional embedded DRAM thanks to the aforementioned benefits of 1T-SRAM.

The Flipper GPU is composed of 51 million transistors, approximately half of which are dedicated to this on-die 1T-SRAM. If Flipper were to use conventional SRAM it would feature over 170 million transistors and have a die much larger than both of the Xbox chips put together. The decision to use 1T-SRAM instead of conventional SRAM was necessary in order to outfit Flipper with this much memory.
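The transistor arithmetic roughly checks out. Treating everything that isn't memory cells as logic, a hypothetical 6T-SRAM version of Flipper lands just over the 170 million mark (a back-of-the-envelope estimate that counts cell arrays only, ignoring sense amps and other array overhead):

```python
# Rough transistor budget for Flipper's on-die memory (3MB total:
# 2MB Z-buffer + 1MB texture cache). Approximation: 1 transistor
# per 1T-SRAM bit vs. 6 per conventional SRAM bit, cell arrays only.
embedded_bits = 3 * 1024 * 1024 * 8        # 3MB in bits (~25 million)
flipper_total = 51_000_000
logic_transistors = flipper_total - embedded_bits  # everything else

sram_6t_total = logic_transistors + embedded_bits * 6
print(f"1T-SRAM cells: ~{embedded_bits / 1e6:.0f}M transistors "
      f"(about half the chip)")
print(f"Hypothetical 6T-SRAM version: ~{sram_6t_total / 1e6:.0f}M transistors")
```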

The 2MB Z-buffer/frame buffer is extremely helpful since we already know from our experimentation with HyperZ and deferred rendering architectures that Z-buffer accesses are very memory bandwidth intensive. This on-die Z-buffer completely prevents those accesses from hogging the limited amount of main memory bandwidth the Flipper GPU is granted. In terms of specifics, there are 4 1T-SRAM devices that make up this 2MB. There is a 96-bit wide interface to each one of these devices, offering a total of 7.8GB/s of bandwidth, which rivals the highest end Radeon 8500 and GeForce3 Ti 500 in terms of how much bandwidth is available to the Z-buffer. Z-buffer checks should occur very quickly on the Flipper GPU as a result of this very fast 1T-SRAM. Also, the current surface being drawn is stored in this 2MB buffer and then later sent off to external memory for display. Because of this, dependency on bandwidth to main memory is reduced.

The 1MB texture cache helps texture load performance, but the impact isn't nearly as big as that of the 2MB Z-buffer. There are 32 1T-SRAM devices (256Kbit each), each of which has its own 16-bit bus, offering 10.4GB/s of bandwidth to this cache.
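Both embedded-memory bandwidth figures follow directly from the device counts, bus widths and the 162MHz clock (again assuming one transfer per clock on each device's bus):

```python
# Bandwidth of Flipper's two embedded 1T-SRAM pools, assuming one
# transfer per 162MHz clock on each device's bus.
CLOCK_HZ = 162_000_000

def pool_bandwidth_gbs(devices: int, bus_bits: int) -> float:
    return devices * (bus_bits // 8) * CLOCK_HZ / 1e9

zbuffer_gbs = pool_bandwidth_gbs(devices=4, bus_bits=96)    # 2MB Z/frame buffer
texture_gbs = pool_bandwidth_gbs(devices=32, bus_bits=16)   # 1MB texture cache

print(f"Z-buffer:      {zbuffer_gbs:.1f} GB/s")  # ~7.8 GB/s
print(f"Texture cache: {texture_gbs:.1f} GB/s")  # ~10.4 GB/s
```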

The first thing that should tip you off about these 1T-SRAM devices on the Flipper die is that they would come in quite handy on a PC platform. Although the Flipper GPU will never be asked to render at greater than 640 x 480 (not a very memory bandwidth intensive resolution), very few gamers will settle for anything less than 1024 x 768 with today's graphics cards. A similar 2MB on-die Z-buffer would improve performance tremendously, especially considering how much more memory bandwidth is consumed in most PC games. While it would be nice for ATI to consider using this sort of technology in their future PC products, the cost would be highly prohibitive.

The potential is also there for Flipper to become cheaper to produce as time goes on. NEC currently has their 0.15-micron embedded DRAM process in production, but it is not as mature as their 0.18-micron eDRAM production, which is why Flipper is currently produced on the latter. By the second half of next year the 0.13-micron eDRAM process should be ready for production, which means that we should be able to see 0.13-micron Flipper GPUs produced in 2003. The move to a 0.13-micron process could cut the 106 mm^2 Flipper die roughly in half, making it much cheaper to produce, but that is all dependent on NEC.

1T-SRAM outside of Flipper

The GameCube features another 24MB of 1T-SRAM that is located outside of the Flipper chip which is used as its main memory. This 64-bit memory bus runs at 2x the operating clock of Flipper, or 324MHz resulting in 2.6GB/s of bandwidth between Flipper and the Cube's main memory. While this is only as much memory bandwidth as an original GeForce 256, remember that all of the Z-buffer accesses are done on the embedded 1T-SRAM and the frame being drawn is done initially to the on-die 2MB buffer. This reduces the need for as much main memory bandwidth as we're used to seeing.
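The 2.6GB/s figure checks out as a 64-bit bus at twice Flipper's clock; for scale, the Xbox's 128-bit 200MHz DDR memory bus works out to 6.4GB/s by the same math:

```python
# Main-memory bandwidth check: GameCube's 64-bit 1T-SRAM bus at
# 2 x 162MHz, vs. the Xbox's 128-bit DDR bus at 200MHz (400MT/s).
cube_gbs = (64 // 8) * (2 * 162_000_000) / 1e9
xbox_gbs = (128 // 8) * (2 * 200_000_000) / 1e9

print(f"GameCube main memory: {cube_gbs:.1f} GB/s")  # ~2.6 GB/s
print(f"Xbox unified memory:  {xbox_gbs:.1f} GB/s")  # 6.4 GB/s
```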

There are two 12MB 1T-SRAM chips located on the GameCube motherboard

But we're not too sure about Nintendo's decision to use 1T-SRAM even for the main memory. There are DDR SDRAM devices out there, not even as fast as the 200MHz DDR featured in the Xbox, that would provide much more memory bandwidth than the 1T-SRAM of the GameCube. This is one of the points where it is unfortunate that the design was completed prior to ATI's acquisition of ArtX, since we might otherwise have seen a Radeon-like 128-bit DDR memory bus used for the main memory in the Cube. The only benefit the Cube gains from using 1T-SRAM as its main memory is low latency access and thus better bus utilization; coupled with the fact that the memory bus is synchronized to the Gekko's FSB and Flipper's operating frequency (162MHz x 2), we can assume that latency is reduced as much as possible with this design. Again, we see elements of the efficiency of the GameCube rather than a focus on raw power.

Flipper makes noises

Flipper is also home to a custom Macronix DSP that essentially does the job of NVIDIA's APU in the Xbox. The only difference is that the Macronix DSP is not powerful enough to perform real-time Dolby Digital encoding without significant latency penalties. The latency induced by the encoding on the Xbox is minimal at worst, and we have confirmed this through our extensive testing of the nForce APU; we could not detect any induced delays in our tests.

Nintendo doesn't provide a digital output on the console itself, so there's no way for a developer to perform real-time encoding even if they felt they had the extra power left over to do so (which is exactly what EA did with DTS encoding on the Playstation 2). We have seen one developer implement something beyond the regular stereo or Pro Logic audio output: Dolby Pro Logic II in Star Wars Rogue Squadron II.

Dolby's Pro Logic II is an algorithm that extracts 5.1 audio out of a stereo signal by comparing the differences and similarities between the two channels. This is known as a matrix surround decoder, since it produces more channels than are in the original signal. Although it's not nearly as good as the 5.1 discrete audio signals found with Dolby Digital or DTS, it's far better than the original Pro Logic, which has its roots in the 1970s. It's also backwards and forwards compatible with the original Pro Logic, meaning that a Pro Logic II encoded signal can be played back on a Pro Logic receiver and vice versa.
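The core idea of matrix surround can be sketched in a few lines. This is a deliberately simplified sum/difference matrix (the real Pro Logic II encoder and decoder add 90-degree phase shifts, channel weighting and active steering logic), just to show how extra channels can hide inside two:

```python
# Simplified sum/difference matrix surround, illustrating the idea
# behind Pro Logic-style encoding. The real Dolby matrix adds
# 90-degree phase shifts and active steering; this is only a sketch.
import math

G = 1 / math.sqrt(2)   # -3dB gain applied to center/surround

def encode(left, right, center, surround):
    """Fold four channels into a stereo pair (Lt/Rt)."""
    lt = left + G * center + G * surround
    rt = right + G * center - G * surround
    return lt, rt

def decode(lt, rt):
    """Recover center and surround from the sum and difference."""
    center = (lt + rt) / 2       # the component common to both channels
    surround = (lt - rt) / 2     # the out-of-phase component
    return center, surround

# A signal placed only in the surround channel survives the round trip:
lt, rt = encode(left=0.0, right=0.0, center=0.0, surround=1.0)
c, s = decode(lt, rt)
print(f"decoded center={c:.2f}, surround={s:.2f}")  # center ~0, surround ~0.71
```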

In the case of the GameCube, the Pro Logic II encoded signal can also be played back on a stereo or Pro Logic device for compatibility, although the Pro Logic decoded version will only have a mono surround along with bandwidth-limited center and surround channels (the Pro Logic decoder applies a low pass filter at 7kHz to these channels). Further, channel separation is not nearly as good with a standard Pro Logic decoder.

A discrete 5.1 signal, on the other hand, is full bandwidth for the 5 main channels and by its very nature supports complete channel separation with targeting of a specific sound to a specific channel.

Pro Logic II is only available on some of the very latest surround sound receivers, while Dolby Digital has been available on most receivers for the past few years. Pro Logic II will eventually be a standard on all receivers, but probably not for another year or two. Any receiver with Pro Logic II will also have Dolby Digital and DTS support. The original Pro Logic has been included on just about every surround sound receiver over the past 15 years.

An interesting thing about the audio processor is that it is connected to 16MB of DRAM via an 8-bit memory bus running at 81MHz (1/2 of Flipper's operating frequency). Obviously, 16MB is a lot of memory for audio processing, so developers are able to use part of that memory as regular storage for data that doesn't need much memory bandwidth, since there is only 81MB/s of bandwidth to this audio DRAM.
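At only 81MB/s, this pool is clearly meant for audio streaming rather than general traffic; a quick calculation shows both the bus math and how long a full 16MB fill would take at that rate:

```python
# Audio DRAM bus: 8 bits (1 byte) wide at 81MHz (half of Flipper's 162MHz).
audio_mbs = (8 // 8) * 81_000_000 / 1e6
print(f"Audio DRAM bandwidth: {audio_mbs:.0f} MB/s")  # 81 MB/s

# Filling the entire 16MB pool at this rate takes about a fifth of a
# second -- fine for audio buffers, slow for streaming game data.
fill_time_s = 16 * 1024 * 1024 / (audio_mbs * 1e6)
print(f"Time to fill 16MB: {fill_time_s:.2f} s")
```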

A true console's I/O

The GameCube has no built-in hard drive, Ethernet or modem, although there are expansion ports for all of these devices. Nintendo has already shown off what will become their Ethernet and modem adapters for the console, but it is questionable how successful a major add-on product will be on a console. History has shown us that there is very little support for things like add-on storage devices, mainly because of a lack of developer support. Developers are already limiting their audience by releasing a title on a specific console, but they limit themselves further if they also require the purchase of an add-on such as a hard drive or Ethernet adapter. This time around may be different, but based on things such as the Sega CD and the horribly executed 64DD drive we wouldn't expect too much from add-on products for the GameCube.


The console does feature two serial ports and a high-speed parallel port for these future add-ons, all of which are driven by the Flipper chip, which houses the I/O controller for these ports. The two serial ports are proprietary designs (not USB like the Xbox's) that can transfer at speeds of up to 27Mbps. While this means that the Ethernet adapter will be limited to far below 100Mbps, 27Mbps is more than enough considering you won't be copying large files to anything on the Cube. The bandwidth is also more than enough for the forthcoming 56K modem.

The high-speed parallel port is also a custom design capable of transferring data at up to 81MB/s (the same speed as the Cube's internal audio DRAM). This would be more than enough for a hard drive.

Save games are stored on memory cards that are not bundled with the system. These memory cards hold 512KB and are known as the Memory Card 59 because of their 59 save blocks. Most games take anywhere between 1 and 4 blocks for general saves, although some take more, like 10 or even up to 40 for saving a full season of Madden 2002. The benefit of these smaller cards (compared to Sony's 8MB card) is that they can be priced much cheaper, at $15, for more than enough space for most users. Accessing the memory cards is also considerably faster than accessing the PS2 memory card.
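The block arithmetic is worth a quick look. Note that the naive 512KB / 59 figure slightly overstates the usable block size, since some of the card's capacity goes to filesystem overhead:

```python
# Memory Card 59 arithmetic: a 512KB card advertised as 59 blocks.
# The naive division slightly overstates the usable block size,
# because part of the card is reserved for filesystem overhead.
card_kb = 512
blocks = 59

naive_block_kb = card_kb / blocks
print(f"Naive block size: {naive_block_kb:.1f} KB")  # ~8.7 KB

# A typical 4-block save vs. the 40-block Madden 2002 season save:
for save_blocks in (4, 40):
    pct = 100 * save_blocks / blocks
    print(f"{save_blocks:>2} blocks = {pct:.0f}% of the card")
```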

Again, it is questionable how successful these add-on parts will be over the lifespan of the Cube. With Xbox owners currently taking advantage of the integrated Ethernet (a very cheap thing to include), we'll just have to wait and see how the Cube fares online.

DVD, but not

Nintendo has always made it a point to stick to proprietary storage media for their games. Some consider this a method of avoiding piracy; others consider it an example of Nintendo's elitist attitude towards the industry. Regardless of what you may believe, the GameCube's storage medium suits the device just fine. This is in contrast to the N64, where the decision to stick with cartridge based games left many complaining about a lack of space.

Nintendo employed a proprietary mini-DVD specification for their games, using small 3"/1.5GB discs for games. This is in comparison to the full-sized 9GB DVD9 discs that the Xbox uses.

The drive itself is a CAV drive, meaning that it reads more data per second off of the outer tracks of the disc than off the inner ones. The Xbox DVD drive is also a CAV drive, capable of reading at anywhere between 2.5MB/s and 6.25MB/s. The GameCube drive is slower and can read between 2MB/s at the inner tracks and 3.125MB/s at the outer tracks.
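Some quick math puts those rates in perspective, assuming idealized sustained sequential reads with no seeking:

```python
# How long streaming a full disc would take at the quoted sequential
# rates (idealized: sustained reads, no seeking).
def full_read_seconds(disc_mb: float, rate_mbs: float) -> float:
    return disc_mb / rate_mbs

cube_disc_mb = 1.5 * 1024   # 1.5GB mini-DVD
print(f"GameCube, inner tracks: {full_read_seconds(cube_disc_mb, 2.0):.0f} s")
print(f"GameCube, outer tracks: {full_read_seconds(cube_disc_mb, 3.125):.0f} s")

# The Xbox's larger 9GB disc at its faster drive, for scale:
xbox_disc_mb = 9 * 1024
print(f"Xbox, outer tracks:     {full_read_seconds(xbox_disc_mb, 6.25):.0f} s")
```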

In spite of this, none of the GameCube games had any significant load times. It should be noted, though, that none of them used textures/scenes as detailed as the Xbox games we were comparing them to.

Because the drive can't physically support DVDs, the GameCube cannot double as a DVD player. Nintendo's official stance on this is that the system is designed to be a pure gaming console and nothing more. Panasonic will be releasing a version of the Cube in Japan with support for DVD playback but there have been no plans to release a similar product elsewhere.

The Resolution Game

With the Xbox there are a number of supported DTV and HDTV resolutions including two of the more interesting ones - 720p and 1080i. The GameCube offers basic support for 480i and 480p, but that is all.

On the rear of the console there are two ports; one labeled analog AV for all composite and S-video connections and the other incorrectly labeled digital AV (it's still an analog signal) for component connections.

The component cables are priced at $30 and require that you use the analog AV cables for audio output and the digital AV cables for the Y Pr Pb component connections. Unlike the Xbox HD pack, these cables go directly from the console to the TV. This is theoretically a better connection than the Xbox HD pack, which requires that the cables connect to an external box, but in reality, as long as the pack is not positioned in an area prone to interference, you're fine. We are aware that you can purchase Monster component cables for the Xbox, but we've got our own issues with horribly overpriced cables.

Whether a game runs in 480i or 480p mode is controlled from within the game itself. In most games, if the digital AV connection is present, the game will ask you if you'd like to proceed in progressive scan (480p) mode.

A heavenly controller

The US Xbox controller is manageable, it's functional, but it's big. There's no getting around that fact.

This board is what the four controllers plug into on the console. It is also home to the BIOS battery and the reset switch (top left).

Nintendo did an excellent job with the GameCube controller, making it mold perfectly to almost any set of hands although a few of us found it a little too small. The size of the controller forces it to have less powerful rumble motors than the Xbox controller which isn't necessarily a bad thing.

The GameCube controller is a more natural fit than either of the competing two.

The only real gripe we had about the controller was that the cords were entirely too short at only 6' compared to the 9' Xbox controller cords.

Disassembling the unit

The GameCube is an extremely small unit whose size you can't fully grasp until you actually see it in person. Nintendo is very adamant about keeping people out of the box, so you'll have to acquire a special bit in order to remove the four screws that hold the box together. This bit is sometimes referred to as a 'gamebit', and a quick Google search will bring up a number of places where you can buy one. The bit should measure 4.5mm, which is the same size as the one needed to open the N64 console. If you have one that can open N64 cartridges, that is a 3.8mm gamebit and it won't work with the GameCube.

As usual, we take no responsibility for any damage you cause your system by following these directions and doing this will void your warranty.

1) The first step is to unscrew all four of the screws using the gamebit. Lifting the cover off will reveal this:


In the above picture we've already removed the front and back covers, which come off easily after the top is removed. The wires towards the back are for the fan, which is pictured below:

The fan assembly also comes out very easily and has the power connector and power switch on it as well.

2) Removing the mini-DVD reader will reveal the shielding over the motherboard, Gekko and Flipper. Unscrew the screws to reveal a large heatsink as seen below:


3) Next you'll want to remove the six screws that hold the heatsink in place; you may need a smaller screwdriver to do this since four of the screws sit between fins on the heatsink. The heatsink will then slide around, and you should be able to pry it off. Be sure not to damage the PCB when prying off the heatsink; it'll come off, you just need to use a little force:

4) Now you can have a good look at the motherboard. The two connectors at the front are for the memory cards, the two connectors at the back are for the analog and "digital" AV outputs. The connector on the board towards the upper right is the interface to the mini-DVD drive.


5) Pulling out the motherboard you are now able to look at its backside:


Clockwise starting at the upper left we have the power connector, serial ports 1 & 2 and finally the high-speed parallel port connector.

6) The GameCube's size is significantly reduced by the fact that the power supply in the unit does not perform any AC-DC conversion. This conversion is handled in an external power brick that ships with the console.


While the PS2's Emotion Engine has a lot of potential, developers have continuously stated that the platform is too difficult to program for. With both the GameCube and Xbox using widely available and well-understood CPU architectures, the real competition exists between the Cube's Gekko and the Xbox's Intel CPU.

In terms of raw performance, the Celeron 733 (4-way set associative L2) will outperform a PowerPC 750 running at 500MHz in any of the synthetic benchmarks we've seen. We can only assume that a 733MHz CPU with a 133MHz FSB and an 8-way set associative L2 cache would be even faster than the Gekko, giving the Xbox the CPU performance advantage.

Both platforms have good compiler support, and the tip of the hat goes to IBM's Gekko for having a very flexible ISA.

Where the GameCube clearly comes out on top, however, is in heat production and die size. The Gekko produces around one third the heat of the Xbox CPU and measures in at close to half its die size. This leads to tremendous cost savings in the production of the CPU, savings that translate into the ability to price the GameCube at $199 instead of $299 like the Playstation 2 and Xbox.


The PS2's Graphics Synthesizer is entirely too dependent on extreme parallelism to fill its 16 pixel pipelines, which could be the cause of many of the slowdowns we've seen in games for the platform. Many of Electronic Arts' titles have been ported to both the GameCube and Xbox, and the first thing everyone seems to notice is that the slowdown problems that plagued the PS2 versions are gone.

The GameCube wins in terms of GPU efficiency courtesy of the embedded 1T-SRAM from MoSys. However, the use of a fixed-function T&L pipeline is a bit of a turn-off for the GPU. Again, this is another situation in which it would have been beneficial to have ATI's input into the design of the product before it was finalized. It is a shame that ATI acquired ArtX only after the design was completed; otherwise we might have seen a programmable T&L pipeline instead.
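To make the fixed-function versus programmable distinction concrete, here is a rough sketch of what a fixed-function T&L stage computes per vertex: one hardwired transform-plus-Lambertian-lighting formula. The matrices and vectors are made-up demonstration values, and this is a conceptual model, not Flipper's actual datapath:

```python
# A minimal conceptual sketch of a fixed-function T&L (transform & lighting)
# stage: one hardwired formula per vertex, no programmability. All values
# here are made up for illustration; this is not Flipper's actual datapath.

def mat_vec(m, v):
    """4x4 matrix times 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def fixed_function_tnl(vertex, normal, mvp, light_dir):
    """Transform a vertex and apply a fixed Lambertian diffuse term."""
    pos = mat_vec(mvp, vertex + [1.0])  # clip-space position
    # The lighting model is baked in: diffuse = max(0, N . L), nothing else.
    diffuse = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return pos, diffuse

# Identity "MVP" matrix for demonstration purposes.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
pos, diff = fixed_function_tnl([1.0, 2.0, 3.0], [0.0, 0.0, 1.0],
                               identity, [0.0, 0.0, 1.0])
print(pos, diff)  # position passes through unchanged, full diffuse intensity
```

A programmable T&L core, by contrast, lets the developer replace that baked-in formula with an arbitrary per-vertex program, which is exactly the flexibility the article notes Flipper lacks.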

Raw GPU power and feature set go to the NV2A core in the Xbox. Games such as Dead or Alive 3 are perfect examples of how easy it is for developers to write custom pixel and vertex shader programs and how great the results can be.

Both Flipper and the NV2A support texture compression, which plays a major role in enabling higher-resolution textures in games. In the GameCube's launch titles we've seen a number of lower-resolution textures compared to the Xbox launch titles. That could just be a sign of early developers not yet taking advantage of the technology, or it could be due to a lack of main memory bandwidth; it's too early to tell.
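A back-of-the-envelope calculation shows why texture compression matters so much here. S3TC-style block compression stores each 4x4 texel block in 8 bytes (4 bits per texel) versus 16 or 32 bits per texel uncompressed; the texture size below is an illustrative example, not taken from any specific game:

```python
# Back-of-the-envelope sketch of the memory argument for texture compression.
# S3TC/DXT1-style block compression stores a 4x4 texel block in 8 bytes
# (4 bits/texel) versus 32 bits/texel for uncompressed RGBA. The 512x512
# texture is an illustrative example, not a figure from any actual title.

def texture_bytes(width, height, bits_per_texel):
    return width * height * bits_per_texel // 8

w = h = 512
uncompressed = texture_bytes(w, h, 32)  # 32-bit RGBA
compressed = texture_bytes(w, h, 4)     # block-compressed, 4 bits/texel

print(f"512x512 RGBA:       {uncompressed // 1024} KB")
print(f"512x512 compressed: {compressed // 1024} KB "
      f"({uncompressed // compressed}:1 ratio)")
```

At an 8:1 ratio, the same texture budget holds far more (or far larger) textures, which is why heavy use of compression can mask a main-memory bandwidth deficit.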

Audio & I/O

The clear winner when it comes to audio is the Xbox. While the GameCube's Dolby Pro Logic II support is great, the format isn't widely supported by today's receivers and lacks many of the benefits of the Xbox's Dolby Digital 5.1.

From an I/O standpoint the Xbox comes out ahead as well because of its built-in hard drive and Ethernet adapter. There have been too many failures of console add-on products in the past to expect incredible success from any add-on for either of the competing consoles. What is interesting to note is that in spite of the hard drive and faster DVD drive, Xbox load times are still not dramatically better than GameCube load times.

We have yet to compare a single title on both platforms to determine which loads faster (in theory the Xbox should), but current GameCube titles exhibit much quicker load times than Xbox titles.

Final Words

Both the GameCube and Xbox are clearly superior to the PS2 in terms of the quality of the graphics seen in games available today. The transition from PS2 to GameCube and/or Xbox is a fairly large leap, but going between GameCube and Xbox is a bit less dramatic.

From what we've seen based on the launch titles currently available, the Xbox takes the crown in terms of visual appeal. Titles such as Rogue Squadron II and Super Smash Brothers Melee for the GameCube do show off some of the Cube's power, but the graphics quality does not match what titles like DOA3 are able to produce on the Xbox.

It's entirely too early to crown one platform the winner, but based on specifications alone, the Xbox is the more powerful console overall. Although the Flipper GPU's use of 1T-SRAM embedded into its die improves performance considerably, the overall package is not as powerful as the Intel/NVIDIA combination beneath the Xbox hood. Features such as real-time Dolby Digital encoding as well as a very powerful programmable T&L core, whose instruction sets have been publicly available for the past year now, are only the tip of the iceberg. The inclusion of isochronous channels within the Xbox's HyperTransport link guarantees uninterrupted bandwidth to the tasks that require it, which is very important when dealing with something like DD encoding, streaming off of the hard disk, or network accesses.

What Nintendo has going for it is a tremendous amount of support, a history of great first-party titles, and a strong focus on gameplay and quality. The GameCube is very efficiently designed and is undoubtedly cheaper to manufacture because of the Gekko/Flipper chips; with the exception of the Xbox, it is leaps and bounds beyond the other consoles and should be a healthy competitor in the future as well.

On the desktop side of things, the Xbox gave us a preview of what to expect from the next-generation NVIDIA part, but what does ArtX's Flipper design tell us about the direction ATI will take in the years to come? It's clear that ArtX's technology could have benefited from ATI's input, but ATI acquired the company for a reason, and it's what ArtX can contribute to ATI that intrigues us most.

As usual, only time will tell the outcome of this and many questions we've asked throughout this article and series. We hope you've enjoyed our coverage on both Microsoft's Xbox and Nintendo's GameCube.