Why use light instead of copper?

High speed PCB design is already a science in and of itself. Timing requirements dictate equal path lengths for a growing number of traces between components, resulting in the ever-familiar squiggly traces on desktop motherboards. Higher frequencies of course also require thinking about wave propagation through the medium - the trace behaves like a transmission line. At 1 GHz, for example, one wavelength is about 0.14 meters for copper traces on FR-4, one of the most common PCB materials.

There’s also propagation delay - for copper traces, commonly given as about 1 ns for every 6 inches of length, though different media have different propagation speeds depending on their dielectric constant. I could go on about the challenges of very high frequency circuit design - which I’m not an expert in by any stretch of the imagination.
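To put rough numbers on those rules of thumb, here is a back-of-the-envelope sketch in Python. The effective dielectric constant of 4.4 is an assumed typical value for FR-4; real boards vary with stackup and trace geometry.

```python
# Back-of-the-envelope figures for copper traces on FR-4.
C = 3.0e8      # speed of light in vacuum, m/s
ER_EFF = 4.4   # assumed effective dielectric constant for FR-4

v = C / ER_EFF ** 0.5                   # propagation speed along the trace, m/s
wavelength_m = v / 1e9                  # one wavelength at 1 GHz
delay_per_inch_ns = 0.0254 / v * 1e9    # propagation delay per inch, in ns

print(f"propagation speed : {v / 1e8:.2f} x10^8 m/s")
print(f"wavelength @ 1 GHz: {wavelength_m * 100:.1f} cm")        # ~14 cm
print(f"delay per 6 inches: {6 * delay_per_inch_ns:.2f} ns")     # ~1 ns
```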

The takeaway is that traditional microelectronic designs are becoming physically constrained in size as frequencies climb. The CPU physically needs to be close to the memory controller, northbridge, and other components for the system to work at the frequencies people expect. Move it relatively far away (on the order of a wavelength), and the design challenges start to stretch microelectronics to its limits. Ultimately, designs sacrifice speed for distance, or vice versa.

The advantages of optical interconnects over traditional copper traces are numerous: using light instead of copper promises vastly higher bandwidth, reduced latency, immunity to electromagnetic interference, and potentially even power savings.

All of these reasons make optical silicon waveguides an obvious alternative to copper traces. Instead of running numerous copper traces between the CPU and the northbridge, for example, one could envision a single optical fiber. Or having many CPUs on one massive board connect to a chipset located even meters away along the signal path. Or even one room full of just CPUs and another room full of memory.
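For a sense of scale on those distances: light in a glass fiber travels at roughly two thirds of its vacuum speed, so each meter of path costs about 5 ns one way. A minimal sketch, assuming a fiber refractive index of about 1.5 and purely illustrative distances:

```python
# One-way flight time for light in an optical fiber.
C = 3.0e8        # speed of light in vacuum, m/s
N_FIBER = 1.5    # assumed refractive index of the fiber core

v_fiber = C / N_FIBER                      # ~2e8 m/s, i.e. ~20 cm/ns
for distance_m in (0.1, 1.0, 5.0):         # illustrative path lengths
    delay_ns = distance_m / v_fiber * 1e9
    print(f"{distance_m:>4} m of fiber -> {delay_ns:4.1f} ns one-way")
```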

What Intel demonstrated on Tuesday is a working example of just that - an optical interconnect fabricated using the current traditional CMOS process, for connecting conventional electronics. Effectively an optical bus on silicon.

What’s different about Intel’s demonstration is that the lasers themselves are hybrid silicon.

Comments

  • toktok - Sunday, August 1, 2010 - link

    Pick 'n' mix motherboards!
  • joshv - Monday, August 2, 2010 - link

    Wonder if different parts of a chip couldn't use optical interconnects to talk to other parts of the chip, basically in free space above the chip. Basically just point the detectors and emitters at each other. The nice thing is that the optical paths can intersect each other with no penalty. Electrical paths have to be routed around each other in the various layers of the chip.
  • Shadowmaster625 - Monday, August 2, 2010 - link

    Doesn't this make it possible to place 8GB of DDR3 onto a single chip? Just stack a bunch of dies and wire them all to a silicon photonic transmitter, then connect that directly to the CPU. Also, shouldn't this make it possible to stack all the NAND flash you'd ever need onto one die? And then SSDs can be made with one memory chip. Replace SATA with silicon photonics, and it should be possible to have a 100GB/sec SSD. In other words, there would be no need for RAM at all...
  • GullLars - Wednesday, August 4, 2010 - link

    There's a couple of fundamental problems with your last part.
    First is the method of read/write and R/W asymmetry of NAND.
    Second is latency. NAND is roughly 1000 times slower than DRAM.

    It would be great for scaling bandwidth though, but then you have the problem of data parallelism and transfer sizes...
    You would also need processing power for NAND upkeep (wear leveling, ECC, block management, channel management, etc), which would be a lot at 100GB/s.
    Today, the fastest NAND dies are around 100MB/s(+?) for reads, so you would need to manage around 1,000 dies (which wouldn't fit in one package BTW XD) for 100GB/s.
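A quick sanity check of the die-count arithmetic in the comment above, taking the commenter's assumed ~100 MB/s per NAND die at face value:

```python
# Dies needed to sustain a target aggregate bandwidth, ignoring
# controller, ECC and wear-leveling overhead (assumed figures only).
TARGET_GB_S = 100     # target aggregate bandwidth, GB/s
PER_DIE_MB_S = 100    # assumed per-die sequential read bandwidth, MB/s

dies_needed = TARGET_GB_S * 1000 / PER_DIE_MB_S
print(f"dies needed for {TARGET_GB_S} GB/s: {dies_needed:.0f}")   # -> 1000
```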
  • EddyKilowatt - Monday, August 2, 2010 - link

    ... whatever you do, don't dumb down AnandTech! This was perfect for a lot of geeky folks out here. Who else puts things like "Laser = gain medium + resonator + pump" in one non-stuffy, easily understood, non-patronizing paragraph?

    A few decades ago when I was a cub engineer, my mentor predicted almost exactly this as the logical evolution of inter-chip links. I hope he's still around to see this finally hit the market.

    How is Intel going to handle the I/O's to these optically-enabled chips? It's the only box on the "Siliconizing" viewgraph without a picture! Fiber is cheap but fiber termination is ex-pen-sive...
  • Cogman - Tuesday, August 3, 2010 - link

    Optics have far more benefits than just "really high speed". For starters, optical cables can be VERY close together without causing any sort of interference. This is a big plus for people like motherboard designers. Now, they could stick RAM anywhere on the board (not just right next to the CPU). They can overlap connectors, do whatever. It makes a big difference.

    Besides basically having no crosstalk / interference problems, optical transmission has the added benefit of being able to send multiple signals down the same line (we probably aren't to the point of being able to realize that). It is entirely possible that we could get to the point of having two optical lines running from the RAM, one for upstream, the other for downstream. Each would have the benefit of being able to send a full 256/512 (or higher) bits of information in a single transmission cycle. We send data serially now because of interference problems; with optics, that would no longer be an issue, and parallel data transmission would easily be able to overtake its serial counterpart.

    Though, all the ultrahigh speed data transmission in the world doesn't mean a thing if the processor can't crunch the numbers fast enough.
  • has407 - Wednesday, August 4, 2010 - link

    > Now, they could stick ram anywhere on the board (not just right next to the CPU) They can overlap connectors, do whatever.

    There's still speed-of-light constraints. For the foreseeable future, the closer, the better (that's the primary reason for the CRAY-1 physical structure--to reduce interconnect distance, not because it looked cool).

    > Optical transmission has the added benefit of being able to send multiple signals down the same line (we probably aren't to the point of being able to realize that.)

    We've been there and done that for years. It's called WDM or DWDM. Virtually all fiber-based communication today multiplexes multiple optical wavelengths/signals across a single fiber. That's how we get much more bandwidth without having to lay new fiber.

    > We send data serially now because of interference problems, with optics, that would no longer be an issue, parallel data transmission would easily be able to overtake its serial counterpart.

    Ummm...no. Parallel interconnects, regardless of medium, suffer from skew and the need for deskew. That always has been, and always will be, a problem for parallel; optical interconnects do not fundamentally change the equation (although they minimize some problems). Thus the desire to reduce the number of parallel interconnects by using a smaller number of faster serial interconnects. Optics offers the option of significantly increasing serial interconnect speed, thus reducing the need for parallel interconnects and their associated problems.

    I.e., the number of bits/sec that can be transferred over a single "wire" (or in this case fiber) minimizes the need for parallel interconnects. Maybe a few niche apps will need the extra bandwidth that parallel can provide, but I have my doubts... Will WDM/DWDM scale fast enough? I bet it will scale faster/cheaper than parallel interconnects, at least for 99% of the market.
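To illustrate the trade-off described in the comment above, here is a sketch comparing how many physical lanes a given aggregate bandwidth needs with plain serial links versus WDM fibers. The per-channel rate, wavelength count, and bandwidth target are illustrative assumptions, not figures from the article:

```python
import math

# Illustrative lane-count comparison: plain serial lanes vs. WDM fibers.
TARGET_GBPS = 400            # assumed aggregate bandwidth target
CHANNEL_GBPS = 12.5          # assumed per-channel signalling rate
WAVELENGTHS_PER_FIBER = 4    # assumed WDM channels per fiber

serial_lanes = math.ceil(TARGET_GBPS / CHANNEL_GBPS)
wdm_fibers = math.ceil(TARGET_GBPS / (CHANNEL_GBPS * WAVELENGTHS_PER_FIBER))

print(f"plain serial lanes needed: {serial_lanes}")   # 32 lanes
print(f"4-wavelength WDM fibers  : {wdm_fibers}")     # 8 fibers
```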
  • jpbattaille - Thursday, August 5, 2010 - link

    Totally agree with this comment - the article seems to gloss over these facts.

    Optical speed of light in fiber is 200 m/s, in vacuum 300 m/s.
  • jpbattaille - Thursday, August 5, 2010 - link

    Sorry, 20 cm /ns and 30 cm/ns
  • PlugAndTweak - Wednesday, August 4, 2010 - link

    This sounds very cool and promising on paper.
    But what does it really mean to us as single end users?
    It isn't the components themselves that will get silicon photonics - that is WAY off in the future.
    It's the interconnects.
    I bet that even if we got all the interconnects on the mobos with silicon photonics, we wouldn't notice any dramatic differences even in very intensive stuff like video editing, 3D rendering, software synthesizers and plug-ins in DAWs, and so on.
    And would SSDs REALLY be faster than any comparable technology today?
    Even theoretically?
    And take a look at tests comparing different memory speeds - there have always been minimal differences (often just 1-2 percent on average). Increasing the bandwidth of the interconnects won't make any dramatic difference in that respect.
    Not for us single power users.
    For the big players with setups of loads of CPUs and other components I could see the speed advantage.
    And there are the other mentioned advantages for the single power user, like less power consumed and possibly smaller motherboards (if they find inexpensive cooling solutions).
    Then there are all the advantages for the manufacturers, with less interference and all the other stuff mentioned.
    I really think that the ones who will benefit most from this are the motherboard manufacturers, much more than us single users.
    Not until they get to the point of using it at the component level.
