ASUS just dropped off their new Striker II Extreme board, based on NVIDIA's upcoming 790i chipset, which features official support for the soon-to-be-released 1600MHz FSB processors and DDR3 memory. Of note, ASUS is utilizing a slightly updated water-block design, backplates on the MOSFET heatsinks and other critical areas, and screw-down mounting for the SPP and MCP cooling blocks. The board features an 8-phase power delivery system with a 2-phase design for the memory. ASUS will offer a full accessory package that includes a 3-way SLI connector, among other things.
 
Overall, the board has a subdued appearance thanks to a minimalist color palette, but it still gives the impression that it means business. However, those who decide to use 3-way SLI with standard cooling solutions will lose the ability to physically install cards in the single PCI Express x1 slot and the two PCI slots. Our first look at the BIOS indicates configurability and options similar to those of the X48-based Rampage Formula board at this time. Early testing is already showing performance gains over the 780i in SLI (at like memory and CPU settings). The 790i chipset does away with the bridge chip the 780i used to provide PCI Express 2.0 capability. Overclocking with an early BIOS spin is also showing improvements over the 780i, especially with Yorkfield and Wolfdale processors.
 
Here are the first screenshots; we will have additional information regarding the chipset and extensive test results when the product launches in March.
[Screenshot gallery]

[Chart: ASUS Striker II Extreme - 3DMark06]

[Chart: ASUS Striker II Extreme - 3DMark05]
Comments
  • mcBullet - Saturday, February 23, 2008 - link

    @mvrx: I had not heard of that, but it sounds like a really great idea. Especially with 3-way SLI, people won't be able to have an audio card.
  • mvrx - Saturday, February 23, 2008 - link

    Skipping PCIe 3.0 (or at least lessening its relevance) and getting the entire industry behind optically interconnected components eliminates the issue of running out of slots. At least it does to a large degree.

    Imagine you want to expand your PC and you've already used the (let's say) six optical interconnect ports on your cards: you drop in a network switch (perhaps it would fit in a 5.25" bay), connect two of the motherboard's optical ports to it, and it breaks out into eight new ports. Or you buy another PC chassis, set it down next to your current PC, and connect a few of the optical ports to expand.

    My big dream is buying a new PC, clustering it at native bus speeds, and then, from a BIOS-level virtual machine controller, simply adding the new PC's resources to your old PC to create a scaling resource pool. This is still a ways away, because Intel and Microsoft don't want you scaling hardware and buying less of it over time. Or in Microsoft's case, they want to force you to buy a copy of the OS for every device you purchase.

    I want to see high-end cluster technologies start at the workstation instead of the laboratory. Make them so common that they become cheap commodity technology everyone enjoys.
  • mvrx - Saturday, February 23, 2008 - link

    Low-cost 100GbE interface chips are more likely to be a reality after 32nm 10GbE chips are brought to market. These will have complete hardware offload engines built in and will hopefully be natively stackable (bonding ports).

    So until the IEEE finalizes the 100GbE standard, bonded multi-channel 10GbE will probably see a future in device interconnection applications like multi-GPU arrays (rough bandwidth numbers below).

    100GbE will hopefully be a natural step that could serve as the interconnection architecture for a more modular computer design. AMD is doing some work to create quite a few modular processing pieces so the home computer can actually scale.
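
    To put some rough numbers on the bonding idea - these are my own back-of-the-envelope figures, not from any spec sheet - here is a quick Python sketch comparing bonded 10GbE links against a PCIe 2.0 x16 slot:

        # Raw signalling rates only; real throughput drops once protocol
        # overhead and offload-engine latency are factored in.
        GBIT = 1e9  # bits per second

        pcie2_x16 = 16 * 5e9 * (8.0 / 10.0)  # 16 lanes at 5 GT/s, 8b/10b encoding = 64 Gbit/s usable
        ten_gbe = 10 * GBIT                  # one 10GbE port

        for ports in (1, 4, 8, 10):
            bonded = ports * ten_gbe
            print("%2d x 10GbE = %5.0f Gbit/s (%.2fx a PCIe 2.0 x16 slot)"
                  % (ports, bonded / GBIT, bonded / pcie2_x16))

    Ten bonded ports works out to 100 Gbit/s, which already exceeds the raw bandwidth of a PCIe 2.0 x16 slot, which is why bonding looks like a reasonable bridge until real 100GbE parts ship.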
  • AggressorPrime - Saturday, February 23, 2008 - link

    I'm pretty sure that by the time 100GbE goes mainstream and becomes affordable to implement widely on motherboards, an external PCIe 3.0 solution will have come out that is much cheaper and faster. Then again, implementing multiple external cards is already possible via Tesla. You have to realize, though, that utilizing multiple GPUs for graphics in a game is very difficult and has only proven successful in CoD 4, in which four CrossFire GPUs improved performance with an average of 92% efficiency per GPU (http://www.anandtech.com/video/showdoc.aspx?i=3232...), yielding 368% of single-GPU performance out of a maximum of 400% (4x the GPUs). The best we can hope for before trying more than four GPUs for graphics is that other games start utilizing four GPUs as well as CoD 4 does, and that drivers are made to support that. The biggest problem is the CPU, as they are just too slow these days to process physics. With nVidia implementing PhysX on their CUDA GPUs, the arena could change drastically. Once we see games start utilizing this technology, we can start thinking about more GPUs for graphics.
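
    For clarity, here is the arithmetic behind those percentages (the 92% per-GPU efficiency figure comes from the linked review; the rest is just multiplication), as a small Python sketch:

        gpus = 4
        efficiency = 0.92            # average per-GPU efficiency reported for CoD 4 in CrossFire

        speedup = gpus * efficiency  # 4 * 0.92 = 3.68, i.e. 368% of a single GPU
        ideal = gpus * 1.0           # perfect scaling would be 400%

        print("%.0f%% of single-GPU performance (ideal: %.0f%%)"
              % (speedup * 100, ideal * 100))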
  • mvrx - Saturday, February 23, 2008 - link

    Hey AggressorPrime,

    I just read that the EagleLake chipset due out in mid-2008 will feature 10GbE ports (http://news.softpedia.com/news/Intel-Eaglelake-Chi...), which I find impressive, as I thought this wasn't coming until early 2009.

    What will really impress me is if they can add a TCP/IP offload engine (TOE) and an iSCSI offload engine, and keep latency below 10 microseconds, which is important for inter-process communication in clustering applications. Channel bonding would also be cool. (See http://www.hpcwire.com/hpc/328051.html - an old article, but informative.)

    So... if Intel can put a 10GbE interface on a workstation chipset, I'd assume a solution could be had where a GPU card has a couple of ports.

    As for your comments on GPU scaling, I can't say that I really have an answer. I would assume that, with enough time, both applications and DirectX could be adapted to spread more processes out to more and more GPUs. I could see the GPU array needing a dedicated network memory device (an external box with 4-8 DDR2/3 slots and a couple of 10GbE ports) to share rendering data.

    One of the biggest hurdles for Server Based Computing (SBC) solutions is that there is no way to effectively share GPU resources with more than one or two VMs without severe performance issues. Separating GPU resources into a cluster/network-addressable resource pool (with its own secondary memory subsystem) would be a big step toward solving that.

    I do like what is happening with AMD's Fusion, Intel's 80-core generic CPU/GPU, and Nvidia's Tesla projects. However, I still don't see the solution I dream of: I want (somewhat) limitless scaling of my graphics performance. If I am crazy enough to buy 16 cards to play my game, I want a method of doing it that isn't restricted by a motherboard (or, worse yet, custom chips from Nvidia).

    I'm curious just how fast (and what latency levels) the SLI bridges connect at.

    System-on-a-chip designs are becoming the way to go in this field. I won't be surprised if we start seeing the communication pieces of motherboard and network chipsets become mini-computers in their own right. With all the offload functionality being put into these, it would be feasible for your video card, or motherboard, to boot up embedded processes that are controlled by a hypervisor VM host.

    I once proposed that the onboard RAID controllers built into chipsets be granted a chunk of system RAM to act as a cache. Imagine your motherboard supporting 32GB of RAM and the BIOS using 4-8GB of it as a cache. Unfortunately, my idea was shot down rather quickly.
  • mvrx - Saturday, February 23, 2008 - link

    Sorry to go on about this, but I just found a link I was thinking of.

    This kind of gives a hint as to some of the plans for optical interconnection of chips, and possibly devices.

    http://www-03.ibm.com/press/us/en/pressrelease/227...

    Imagine the 32nm Cell B.E., paired with a good GPU, and interconnected by the hundreds via this technology. *Finally*, I can get plugged into the matrix without those annoying pixels ruining my fantasy.. ;-)

    IBM won't say publicly, but they tipped us off in a couple of interviews that the POWER7 core will have some form of optical interconnect technology. And if you look closely at the PlayStation 4 (PS4) street talk, Sony is talking about an optical interconnect for the unit. Could this be because they want the PS4's clustering abilities to kick arse? I could see the government thinking 100,000 PS4s at $350 each makes for a bargain supercomputer. :)
  • mvrx - Saturday, February 23, 2008 - link

    AMD and Nvidia need to get off this proprietary approach to adding multiple GPUs to a system. You can't expect to ever really scale while limited to x16 PCIe 2.0 slots.

    I propose an architecture based on optical 100GbE, where a single header card (likely a PCIe 2.0 x16 card) interconnects with GPU cards that have no PCIe connector at all, just power and optical connectors. Using switches, it should be possible to add 128 or more optically interconnected GPU devices (rough bandwidth math at the end of this comment).

    This may sound insane, but it's not. Thanks to some recent innovations from IBM and their 32nm partner group, mass-produced 100GbE chips will be a possibility soon. Even the 40GbE-over-copper standard would work here, but I see no reason to stay with copper. If USB3 can have cheap optical, we can certainly adopt a new standard for mainstream 100GbE.

    Watch for an announcement in '09 about the PlayStation 4 (PS4) possibly getting an optical interconnect solution. With the right development, an array of PS4s (connected via a 100GbE NIC) could act as a video processing array for a PC.

    I really don't want to see PCIe 3.0 come to light. If these new 32nm optical interconnect technologies are as cheap as I think they can be, we can say goodbye to restrictive electrical interconnects and finally start scaling enthusiast platforms in a more linear fashion.
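
    The bandwidth math I mentioned above - all of the port counts here are my own assumptions, nothing official - sketched in Python:

        # Hypothetical switched 100GbE GPU fabric: bandwidth per device versus
        # the shared uplinks back to the host's header card.
        LINK_GBIT = 100      # one optical 100GbE link
        host_uplinks = 4     # assumed number of ports on the header card
        gpu_count = 128      # devices hung off the switch fabric

        host_total = host_uplinks * LINK_GBIT          # aggregate bandwidth back to the host
        per_gpu_share = float(host_total) / gpu_count  # worst case: every GPU talks to the host at once

        print("Each GPU: %d Gbit/s to the fabric" % LINK_GBIT)
        print("Host uplinks: %d Gbit/s total" % host_total)
        print("Per-GPU share under full contention: %.2f Gbit/s" % per_gpu_share)

    So GPU-to-GPU traffic over the switch scales nicely, but anything that has to funnel back through the header card gets squeezed hard at 128 devices, which is why I'd want most rendering data shared inside the fabric rather than round-tripped through the host.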
  • marsbound2024 - Saturday, February 23, 2008 - link

    " However, those who decide to use 3-way SLI will lose the ability to physically install cards in the single PCI Express x1 slot and the two PCI slots."

    What if they are single-slot solutions?
  • Gary Key - Saturday, February 23, 2008 - link

    I will change that statement to differentiate between custom solutions and the standard 8800GTX/Ultra cooling options.
  • mcBullet - Saturday, February 23, 2008 - link

    As of right now, nVidia has no single-slot cards that are enabled for 3-way SLI.
