Packaging and Design

The GigaX1108 comes packaged with the switch itself, a power cable, an instruction booklet, and wall-mount screws.






The switch itself is charcoal throughout, with the exception of the white text for the port numbers and the status, speed, duplex, and power indicators. On the back of the switch are 8 Ethernet ports and a plug for the AC adapter. The switch is fanless, and as such, there are ventilation grills on the top, sides, and bottom for heat dissipation.






The front of the switch has a set of LEDs for each port, clearly labeled and self-explanatory. The Status/Activity light is solid green when a connection is established and blinks when data is being transmitted or received. The Speed LED is green when connected at 1000 Mbps, amber at 100 Mbps, and off at 10 Mbps or when no link is detected. The Duplex LED is amber when the port is operating in full-duplex mode, blinking when collisions occur in half-duplex mode, and off when operating in half-duplex with no collisions.
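The indicator scheme above amounts to a simple lookup. As an illustrative sketch only (the state names are our own shorthand, not vendor firmware logic):

```python
# Illustrative decoding of the GigaX1108 front-panel LEDs as described above.
# State names ("green", "amber", etc.) are assumptions for this sketch.

def speed_led(link_mbps):
    """Return the Speed LED state for a given link rate."""
    if link_mbps == 1000:
        return "green"
    if link_mbps == 100:
        return "amber"
    return "off"  # 10 Mbps or no link detected

def duplex_led(full_duplex, collisions=False):
    """Return the Duplex LED state."""
    if full_duplex:
        return "amber"  # solid amber: full duplex
    return "blinking" if collisions else "off"

print(speed_led(1000), duplex_led(True))  # green amber
```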

One feature that would have been nice is the inclusion of a third color to indicate 10 Mbps connectivity. Also, the choice of amber rather than green for the duplex indicator is strange: given a perfect network configuration, you would like to see all green lights on the switch.

Flipping the switch onto its back, we see two circular non-slip plastic feet in the center. One uncommon feature is that the two metallic discs beneath these feet are actually magnets. With all 8 ports connected, the switch was still able to stay attached to a metal rack without much problem. We wouldn't recommend hanging the switch by the magnets alone, in case a pulled cable sends it crashing down, but the option is there. One place where this can be useful is on a metal computer case: on our typical beige metal cases, the switch stayed fairly snug with the magnets and the non-slip feet. Finally, near the four corners are four smaller circular feet with grooves for wall-mounting the switch in different orientations.

Opening the switch is easy: there are four small screws to remove, and the top of the switch slides off. Taking a look inside, we see two Marvell 88E1145-BBM quad-channel PHYs coupled with a Marvell 88E6181 gigabit Ethernet switch and four Delta Electronics LF9401 dual-port gigabit transformers. Please note that in the picture, the heatsinks have already been removed from the three Marvell chips.






The 88E1145 is a single-chip device containing four independent Gigabit Ethernet transceivers; it performs all of the physical-layer functions.






The 88E6181 is part of the Marvell Link Street family of integrated low-powered networking devices designed for the SOHO market. The device itself is an 8-port GbE QoS switch integrating a high-performance switching fabric with four priority queues, a high-speed address look-up engine, eight interface ports supporting Serial Gigabit Media Independent Interface (SGMII), and 1 Mb of memory.
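To illustrate what "four priority queues" buys you, here is a minimal strict-priority scheduler sketch. The actual queue discipline the 88E6181 implements is not documented here, so treat this purely as a conceptual model:

```python
from collections import deque

# Conceptual model of a 4-queue priority egress port, strict priority.
# Queue 3 is highest priority; frames are plain strings for the sketch.
class PriorityPort:
    def __init__(self, num_queues=4):
        self.queues = [deque() for _ in range(num_queues)]

    def enqueue(self, frame, priority):
        self.queues[priority].append(frame)

    def dequeue(self):
        # Always drain the highest-priority non-empty queue first.
        for q in reversed(self.queues):
            if q:
                return q.popleft()
        return None  # all queues empty

port = PriorityPort()
port.enqueue("bulk transfer frame", 0)
port.enqueue("voice frame", 3)
print(port.dequeue())  # voice frame
```

Under strict priority, high-priority traffic always preempts lower queues, which is why QoS policing (tested with 802.1p or ToS markings) matters for mixed traffic.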






Here is a diagram of how the three Marvell chips work together.







12 Comments


  • starjax - Wednesday, August 11, 2004 - link

    Ghost is notorious as a network hog. I have an SMC 16-port, Linksys 16- and 8-port switches, and a 3Com OfficeConnect hub, all supporting 10/100. The switches are autonegotiating/autosensing, store-and-forward. What I noticed is that the different models/brands have different overheads. I average 300 MB/min on the 8-port Linksys; with the SMC I get closer to 400 MB/min. The 16-port Linksys has a few MB/min better performance. With hard drives getting larger and larger, I have more data to move when deploying new systems here at work. The quicker I can do it, the happier I am.
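    For reference, those MB/min figures work out to fairly low link utilization. A quick conversion (assuming decimal megabytes, 1 MB = 8 Mb):

```python
# Convert MB/min throughput figures to Mbps and to utilization of a
# 100 Mbps link. Assumes decimal megabytes (1 MB = 8 Mb).
def mb_per_min_to_mbps(mb_per_min):
    return mb_per_min * 8 / 60

for rate in (300, 400):
    mbps = mb_per_min_to_mbps(rate)
    print(f"{rate} MB/min = {mbps:.0f} Mbps "
          f"({mbps:.0f}% of a 100 Mbps link)")
```

So even the faster switch leaves roughly half of a 100 Mbps link idle, which is consistent with Ghost (and disk speed) being the bottleneck rather than the switch.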

    Back to my point: I would be interested in seeing head-to-head reviews comparing overhead/latency, throughput, etc.

    Starjax
  • KristopherKubicki - Tuesday, August 10, 2004 - link

    Good work Brian. Looks like the networking front is faring better than the Linux front at this point!

    http://anandtech.com/linux/showdoc.aspx?i=2158

    Kristopher
  • douglar - Tuesday, August 10, 2004 - link

    Ok, sounds good.

    I'm no Linux expert either; it's just that the one time I was spec'ing out GbE NICs for servers, the local Linux man kept telling me that I was being handicapped by the Windows TCP/IP stack and file copy routines.

    I was able to get much higher GbE utilization by copying multiple files at the same time from hardware RAIDs with many disks. If you have access to some good RAIDs, that might offset the single-disk throughput issues somewhat.

    I'm also curious about what happens if you have GbE and 100baseTX running at the same time on the switch. Does it buffer well when one card is faster? Will one slow card funk the stew?

    If you have a smart switch that records such things, please report packet loss/ re-xmit numbers.

    I am very interested in the nForce3 250Gb card, since it bypasses PCI.

  • BrianNg - Tuesday, August 10, 2004 - link

    #8
    Since this was the first switch review, I was not sure how detailed to be. I was under the general impression that I should do some quick benchmarks to see how well it compares to a 10/100 switch.
    I am in the process of reviewing another gigabit switch, and so far, just from the tests I did with the ASUS switch, this switch's performance is lagging with very large files.
    The couple of RAM-drive programs that I have seen only allow the creation of 2 GB RAM drives. For files smaller than 2 GB, the file copy times between two computers with gigabit cards are under a minute each.
    As for operating systems, my expertise is in Microsoft systems. I probably don't know enough about the other UNIX/Linux systems to benchmark them properly.
    I should be getting additional NICs and test machines in the near future.
    I'll try to see how many of your other suggestions I can accommodate in the next review.
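    For what it's worth, scripting the copies makes those sub-minute timings reproducible. A hedged sketch (the temp-file paths below are placeholders; for a real run, point the source at a RAM drive and the destination at a share on the other machine):

```python
import os
import shutil
import tempfile
import time

# Time a file copy and report MB/s. Paths here are temp-file placeholders;
# in a real test the source would sit on a RAM drive and the destination
# on a network share reached through the switch under test.
def timed_copy(src, dst):
    start = time.time()
    shutil.copyfile(src, dst)
    elapsed = time.time() - start
    return os.path.getsize(src) / 1e6 / elapsed  # MB/s

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(1_000_000))  # 1 MB test payload
    src = f.name
dst = src + ".copy"
print(f"{timed_copy(src, dst):.1f} MB/s")
```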

    Also, please keep the comments coming. It helps me to know how detailed I should be when testing the hardware.

    Thanks,

    Brian
  • douglar - Tuesday, August 10, 2004 - link

    I was a little disappointed with this review. While the disclosure about the test system was good, the only thing this review told me was that GbE is faster than 100Base-TX and that the device in question didn't have any obvious flaws. I really don't know if it is any better than other GbE switches, etc. As it is, I don't really think this article tested anything other than hard drive speed.

    I'd suggest the following improvements for the performance testing, to identify/remove system bottlenecks:

    * Compare using more than one GbE switch
    * Use ram drives for at least some of the file copy tests
    * Compare using the fastest accepted system bus and nic. If we don't know the fastest, test with more than one type of NIC/bus.
    * Compare using more than one operating system

    This would tell me how this switch compares against other comparable switches, and would help remove/identify bottlenecks from the testing such as NIC, NIC system bus, NIC driver, operating system, and hard drive speed.

    Other than straight performance benchmarking, there are some other "tests of quality" that would have been nice to see:

    * Test with different speed nics
    * Test with max length cables for signal strength
    * Test with low quality cables
    * Test with multiple copies at the same time
    * Test with 4+ computers at the same time
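    One way to take the hard drive out of the equation entirely is a memory-to-memory socket test. A minimal sketch over loopback (for a real switch test, the sender and receiver would run on separate hosts connected through the switch):

```python
import socket
import threading
import time

# Minimal memory-to-memory TCP throughput test. Runs over loopback here;
# no disk I/O is involved, so the result reflects stack + link speed only.
PAYLOAD = b"\x00" * 65536
TOTAL = 64 * 1024 * 1024  # send 64 MB per run

def receiver(listener):
    conn, _ = listener.accept()
    with conn:
        # Drain everything the sender transmits.
        while conn.recv(65536):
            pass

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # ephemeral port
srv.listen(1)
threading.Thread(target=receiver, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
start = time.time()
sent = 0
while sent < TOTAL:
    cli.sendall(PAYLOAD)
    sent += len(PAYLOAD)
cli.close()
elapsed = time.time() - start
print(f"{sent / elapsed / 1e6:.0f} MB/s")
```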

  • Dranzerk - Tuesday, August 10, 2004 - link

    Nice to know I can still use my Cat5 network with a simple upgrade. With lots of houses built with networks wired in now, Cat5e never really pushed Cat5 out of the housing market the way Cat5 itself caught on; many houses today still use Cat5 for networking, simply because the only real difference is interference rejection, and you don't get much of that inside the walls of a house anyway :P
  • SpaceRanger - Tuesday, August 10, 2004 - link

    Being that AT is relatively new to Networking reviews, I felt this was a good first try at it. Keep up the good work Brian.
  • BrianNg - Tuesday, August 10, 2004 - link

    #1
    Let me check on that, and I'll get back to you.
    #3
    Thanks for the comments. Adding more clients is planned for the future; I am in the process of adding equipment to the lab. As for the other tests you recommended, I'll see how many of them I can squeeze into the next review.

    Thanks,

    Brian
  • Yozza - Tuesday, August 10, 2004 - link

    The review ends up more or less just testing the performance of the Intel Pro/1000 CT NIC rather than the switching fabric - showing that it can get pretty damn close to wire speed at 1Gbps, although you didn't test bi-directional throughput.

    How about also testing with multiple hosts, a range of frame sizes and traffic patterns, measurements of switching latency/jitter, head-of-line blocking, etc? I suspect the switch fabric would be rather more stressed with multiple hosts and smaller frame sizes, especially with data traversing between the two Marvell 4xGbE PHYs.

    Indeed, jumbo frame support would be useful for increasing throughput and reducing CPU overhead from packet segmentation, as mentioned in a previous comment, yet it is disappointing that the review doesn't mention or test this at all.

    The text of the review also mentions "The device itself is an 8-port GbE QoS switch integrating a high-performance switching fabric with four priority queues". How about testing the effectiveness of its QoS policing with 802.1p or ToS?
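    Generating test traffic with different ToS/DSCP markings is straightforward from userspace. A minimal sketch (the destination address/port and marking values are placeholder examples, not a recommended test plan):

```python
import socket

# Send UDP datagrams with different IP ToS/DSCP markings so a QoS-aware
# switch can classify them. DSCP occupies the upper 6 bits of the ToS byte.
# Destination address/port are placeholders for this sketch.
def send_marked(dst, dscp, payload=b"x" * 1000, count=10):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    for _ in range(count):
        s.sendto(payload, dst)
    s.close()

send_marked(("127.0.0.1", 9999), dscp=46)  # EF (expedited forwarding)
send_marked(("127.0.0.1", 9999), dscp=0)   # best effort
```

    Flooding the switch with best-effort traffic while timing the EF stream would show whether the four priority queues actually do anything.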

    I don't mean to criticize the reviewer - obviously some effort has been put into the review, but from a networking standpoint, it looks very amateurish and doesn't really test the product at all.
