Scaling up (bitrate) and out (WDM)

 

Intel demonstrated an effective speed of 12.5 gigabits/s per wavelength. Using wavelength division multiplexing (WDM), one can increase the number of wavelengths (or channels) in the fiber, effectively adding more bandwidth. Intel calls this 'scaling out.' This is what's done in virtually every long-haul fiber optic telecommunications network: even though each wavelength in a conventional telco system might carry only 10 or 20 gigabits/s, the fiber itself can carry many wavelengths, yielding much higher aggregate bandwidth over a single fiber.

 
Wavelength Division Multiplexing (WDM)
 
One can also increase the bitrate of each channel; Intel calls this 'scaling up.'
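
Put together, the two axes simply multiply. A minimal back-of-the-envelope sketch in Python (four wavelengths at 12.5 gigabits/s each gives the 50 gigabits/s link discussed here; the larger channel counts and bitrates are purely illustrative, not Intel's roadmap):

```python
def aggregate_gbps(wavelengths: int, gbps_per_wavelength: float) -> float:
    """Aggregate fiber bandwidth: 'scaling out' adds wavelengths,
    'scaling up' raises the bitrate of each one."""
    return wavelengths * gbps_per_wavelength

print(aggregate_gbps(4, 12.5))   # Intel's demo: 4 x 12.5 Gb/s = 50 Gb/s
print(aggregate_gbps(25, 12.5))  # scaling out only: 312.5 Gb/s
print(aggregate_gbps(25, 40.0))  # scaling out and up: 1000 Gb/s (1 Tb/s)
```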
 
 
Intel’s demonstration is effectively a vastly miniaturized end-to-end fiber optic telecommunications system. Instead of cities, you’ve got CPUs and ICHs. Instead of a rack of modulators and optical multiplexers, you’ve got one chip at each end. The links between components are inevitably going optical, the same way the ever-growing need for bandwidth drove almost all terrestrial communications onto fiber optic networks.
 
Both this 50G silicon photonics link and Light Peak are at the core of Intel’s overall I/O strategy for the future, one where copper data links are supplanted almost entirely by optical fibers. No doubt copper will live on - you still need to bring power to devices - but in a reduced capacity.
 
Light Peak is Intel’s name for its first consumer implementation of an external optical bus; it will start at 10 gigabits per second and should begin appearing in products sometime in 2010. The bus will support multiple protocols as well - in theory, you could send USB, Ethernet, audio, and video at very high rates over one unified optical connection. Picture the same thing happening on the motherboard and you’ll get a sense for how optical data connections will begin to supplant copper.
 
Hopefully some day we’ll be talking about replacing electrons in microprocessors with photons, and computing entirely with light. Optical computing poses a host of challenges before it can be fully realized, but eventually Intel might replace ICs with photonic equivalents entirely. That future is still decades away; until then, we’ll definitely be using optical buses to move data around.
Comments

  • toktok - Sunday, August 1, 2010

    Pick 'n' mix motherboards!
  • joshv - Monday, August 2, 2010

    Wonder if different parts of a chip couldn't use optical interconnects to talk to other parts of the chip, basically in free space above the chip - just point the detectors and emitters at each other. The nice thing is that the optical paths can intersect each other with no penalty. Electrical paths have to be routed around each other in the various layers of the chip.
  • Shadowmaster625 - Monday, August 2, 2010

    Doesn't this make it possible to place 8GB of DDR3 onto a single chip? Just stack a bunch of dies and wire them all to a silicon photonic transmitter, then connect that directly to the CPU. Also, shouldn't this make it possible to stack all the NAND flash you'd ever need onto one die? And then SSDs can be made with one memory chip. Replace SATA with silicon photonics, and it should be possible to have a 100GB/sec SSD. In other words, there would be no need for RAM at all...
  • GullLars - Wednesday, August 4, 2010

    There are a couple of fundamental problems with your last part.
    First is the method of read/write and the R/W asymmetry of NAND.
    Second is latency: NAND is roughly 1000 times slower than DRAM.

    It would be great for scaling bandwidth, but then you have the problem of data parallelism and transfer sizes...
    You would also need processing power for NAND upkeep (wear leveling, ECC, block management, channel management, etc.), which would be a lot at 100GB/s.
    Today, the fastest NAND dies are around 100MB/s(+?) for reads, so you would need to manage 1,000 dies (which wouldn't fit in one package BTW XD) for 100GB/s - quick sketch of that math below.
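
    A rough sanity check of that figure, assuming the ~100MB/s-per-die read speed quoted above:

    ```python
    import math

    def dies_needed(target_gb_per_s: float, mb_per_s_per_die: float) -> int:
        """NAND dies that must be read in parallel to hit a target bandwidth."""
        return math.ceil(target_gb_per_s * 1000 / mb_per_s_per_die)

    print(dies_needed(100, 100))  # 1000 dies for 100GB/s at 100MB/s each
    ```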
  • EddyKilowatt - Monday, August 2, 2010

    ... whatever you do, don't dumb down AnandTech! This was perfect for a lot of geeky folks out here. Who else puts things like "Laser = gain medium + resonator + pump" in one non-stuffy, easily understood, non-patronizing paragraph?

    A few decades ago when I was a cub engineer, my mentor predicted almost exactly this as the logical evolution of inter-chip links. I hope he's still around to see this finally hit the market.

    How is Intel going to handle the I/O's to these optically-enabled chips? It's the only box on the "Siliconizing" viewgraph without a picture! Fiber is cheap but fiber termination is ex-pen-sive...
  • Cogman - Tuesday, August 3, 2010

    Optics have far more benefits than just "really high speed". For starters, optical cables can be VERY close together without causing any sort of interference. This is a big plus for people like motherboard designers. Now, they could stick ram anywhere on the board (not just right next to the CPU). They can overlap connectors, do whatever. It makes a big difference.

    Besides basically having no crosstalk/interference problems, optical transmission has the added benefit of being able to send multiple signals down the same line (we probably aren't to the point of being able to realize that.) It is entirely possible that we could get to the point of having two optics lines running from the ram, one for upstream, the other for down, each able to send a full 256/512 (or higher) bits of information in a single transmission cycle. We send data serially now because of interference problems; with optics, that would no longer be an issue, and parallel data transmission would easily be able to overtake its serial counterpart.

    Though, all the ultrahigh speed data transmission in the world doesn't mean a thing if the processor can't crunch the numbers fast enough.
  • has407 - Wednesday, August 4, 2010

    > Now, they could stick ram anywhere on the board (not just right next to the CPU). They can overlap connectors, do whatever.

    There are still speed-of-light constraints. For the foreseeable future, the closer, the better (that's the primary reason for the CRAY-1's physical structure--to reduce interconnect distance, not because it looked cool).

    > Optical transmission has the added benefit of being able to send multiple signals down the same line (we probably aren't to the point of being able to realize that.)

    We've been there and done that for years. It's called WDM or DWDM. Virtually all fiber-based communication today multiplexes multiple optical wavelengths/signals across a single fiber. That's how we get much more bandwidth without having to lay new fiber.

    > We send data serially now because of interference problems; with optics, that would no longer be an issue, and parallel data transmission would easily be able to overtake its serial counterpart.

    Ummm...no. Parallel interconnects, regardless of medium, suffer from skew and the need for deskew. That always has been, and always will be, a problem for parallel; optical interconnects do not fundamentally change the equation (although they minimize some problems). Thus the desire to reduce the number of parallel interconnects by using a smaller number of faster serial ones. Optics offers the option of significantly increasing serial interconnect speed, thus reducing the need for parallel interconnects and their associated problems.

    I.e., increasing the number of bits/sec that can be transferred over a single "wire" (or in this case fiber) minimizes the need for parallel interconnects. Maybe a few niche apps will need the augmented bandwidth that parallel can provide, but I have my doubts... Will WDM/DWDM scale fast enough? I bet it will scale faster/cheaper than parallel interconnects, at least for 99% of the market. (Rough numbers on that skew budget are sketched below.)
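
    To put figures on the skew point, a quick sketch using the 12.5 gigabits/s per-lane rate from the article and light's ~20 cm/ns speed in fiber:

    ```python
    bit_rate_gbps = 12.5             # per-lane bitrate from Intel's demo
    prop_speed_cm_per_ns = 20.0      # light in fiber travels at roughly 2/3 c

    bit_time_ns = 1.0 / bit_rate_gbps                    # 0.08 ns (80 ps) per bit
    skew_budget_cm = bit_time_ns * prop_speed_cm_per_ns
    print(f"{skew_budget_cm:.1f} cm")  # ~1.6 cm of path mismatch eats a full bit time
    ```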
  • jpbattaille - Thursday, August 5, 2010

    Totally agree with this comment - the article seems to gloss over these facts.

    Optical speed of light in fiber is 200 m/s, in vacuum 300 m/s.
  • jpbattaille - Thursday, August 5, 2010

    Sorry, 20 cm/ns and 30 cm/ns
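
    Those corrected figures put numbers on the "closer, the better" point above; a rough sketch of round-trip delay at fiber speed (~20 cm/ns):

    ```python
    prop_speed_cm_per_ns = 20.0   # light in fiber, ~2/3 of its vacuum speed

    for distance_cm in (3, 15, 30):
        round_trip_ns = 2 * distance_cm / prop_speed_cm_per_ns
        print(f"{distance_cm:>2} cm away -> {round_trip_ns:.1f} ns round trip")
    ```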
  • PlugAndTweak - Wednesday, August 4, 2010

    This sounds very cool and promising on paper.
    But what does it really mean to us as single end users?
    It isn't the components themselves that will get silicon photonics - that's WAY ahead in the future.
    It's the interconnects.
    I bet that even if we got all the interconnects in the mobos with silicon photonics, we wouldn't notice any dramatic differences even in very intensive stuff like video editing, 3D rendering, software synthesizers and plug-ins in DAWs, and so on.
    And would SSDs REALLY be faster than any comparable technology today?
    Even theoretically?
    And take a look at tests where different memory speeds are compared: there have always been minimal differences (often just 1-2 percent on average). Increasing the bandwidth of the interconnects won't make any dramatic difference in that respect.
    Not for us single power users.
    For the big players with setups of loads of CPUs and other components I could see the speed advantage.
    And there are the other mentioned advantages for the single power user, like less power consumed and possibly smaller motherboards (if they find inexpensive cooling solutions).
    Then there are all the advantages for the manufacturers, with less interference and all the other stuff mentioned.
    I really think that the ones who will benefit most from this are the motherboard manufacturers, much more than us single users.
    Not until they get to the point of using it on the component level.
