Silicon Photonics – Cisco and Intel see "Light at the End of the Tunnel"


Among the many presentations on Silicon Photonics (SiPh) at the excellent 2013 Open Server Summit, two were of special interest:

  • Joel Goergen of Cisco called for a radically new data center architecture that used SiPh to interconnect components and modules, rather than circuit cards or racks of equipment.
  • Mario Paniccia of Intel focused on using SiPh for rack level interconnects, but called attention to total solution cost as a critical issue to be solved.

The other presentations – from SiPh component vendors, potential customers (Oracle), and a market researcher (Ovum) – all agreed on the promise and potential of SiPh, but differed greatly on the technology details, link distances, receivers vs. transceivers, and the “sweet spot” for a volume market.

Silicon Photonics is a new approach to using light (photons) to move huge amounts of data at very high speeds, with extremely low power, over a thin optical fiber rather than using electrical signals over a copper cable.  It has been in the research stage at Intel for over 10 years, while a few component/module companies have already shipped SiPh receivers (but not yet integrated transmitter/receivers, or transceivers).

For a description of all the SiPh (and other) presentations at the 2013 Open Server Summit, please visit their web site for the conference program.  You will also find catchy quotes there like: “Only silicon photonics holds the promise of making 100G more cost-effective than 10G and 40G nets,” by Andy Bechtolsheim, Arista Networks, Oct 2012.

Using Integrated Silicon Photonics for Higher Speed Interconnect Technology – A Frame Work for The Next Generation, by Joel Goergen of Cisco:

Exponentially increasing Internet traffic, along with the Internet of Things (IoT), will place a huge burden on next-generation, cloud-resident data centers. The new requirements include higher system performance, coping with higher power consumption via more effective cooling concepts, and faster interconnect speeds (between components, modules, cards, and racks). The challenge for designers is to provide faster compute/storage/networking systems with more effective bandwidth/performance per Watt and with highly efficient cooling. Hopefully, all that can be provided at improved cost/performance/power efficiency for the owner of the data center.

Goergen sees the prime use of SiPh as a high speed/low latency interconnect for individual components and modules used for compute, memory and storage (possibly networking as well, but that was not mentioned). Attributes of this future system include: lots of links, very low latency, lower power consumption, minimum protocols, secure and easy to scale.  

The realization of that vision is shown in the figure below.

Dis-aggregated set of things becomes interconnected through Silicon Photonics.
Silicon Photonics Simplifying Interconnections

A huge advantage of this “SiPh to connect everything” approach is “intelligent power,” which includes power efficiency, monitoring, and the capability to repurpose power from one area to another. The focus would be on “power distribution to the chip level,” according to Joel. His stated bottom line was that “total ASIC power is screaming for alternative system architectures.”

An illustration of “intelligent power” within a future data center is shown below:

Silicon Photonics has the potential to enable intelligent powering, improving overall data center power efficiency.
SiPh Will Improve Power Efficiency

The advantages of this novel approach include optimized cooling in a decentralized environment and more effective use of Data Center facility space. Joel proposed to localize the CPU/memory/storage farms and contain the heat within that area of the building. The result would be to keep like components together and to allow farm types to change as the Data Center grows or as needs change. It would also better manage electrical and cooling distribution costs. He said that such a distributed architecture would drive new, enhanced cooling technologies.

Author’s Note:

The emphasis on power and cooling is of utmost importance as this is often cited as the number one problem with large, high performance Data Centers. Joel is proposing use of SiPh to mitigate that problem.

In summary, this presentation proposes using SiPh as a high speed/low latency interconnect for components and modules within Data Center equipment.  The concepts of cards and racks are replaced by interconnected components/modules.

The benefits were said to include:

  • Drive Higher Voltages to the chip due to reduction in the DC Voltage (IR) drop
  • Intelligent Power Distribution – not just efficiency or monitoring
  • Liquid Cooling at the chip / at the system – hotter components and higher densities are coming
  • Dis-Integrate the Data Center Components – target the most effective way to organize and optimize power and cooling, using Photonic Interconnects as the framework

Revolutionizing Computing and Communications with Silicon Photonics, by Mario Paniccia, PhD (Physics), of Intel

Intel claims that Silicon Photonics offers a way to extend silicon manufacturing to higher speeds and thus provide low-cost opto-electronic solutions and tremendous bandwidth. The result would be advances in a wide range of applications in servers, high-performance computing, and networking. Recent developments point to practical applications in the near term. For example, a new optical connector and fiber technology support data rates up to 1.6 terabits per second.

Mario unequivocally stated that the “sweet spot” for SiPh deployment was rack-level interconnects on the order of six to 12 inches. [Other SiPh speakers talked about distances of 2 km and more.]  He indicated that Mega Data Centers, High Performance Computing (HPC), and the NSA Data Center in Utah were all interested in SiPh for that application. SiPh promises include increased performance and energy efficiency with lower system cost and thermal density. This will “enable new form factors,” he added.

Paniccia claims that any interconnect link running at 25 Gb/sec or more over a distance of 2 m or more will need a photonic link. But such fiber optic interconnect links are expensive and dominate HPC/Mega Data Center costs. The challenge is total system cost, which includes the photonics (laser, packaging, assembly) as well as the cables and connectors.  “Current cost constraints limit use of photonics in and around servers,” Mario said.
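Paniccia's rule of thumb can be sketched as a simple threshold check. This is our own illustration of his stated numbers, not anything Intel published; the function name and boolean form are ours:

```python
def needs_photonic_link(rate_gbps: float, distance_m: float) -> bool:
    """Illustrative check of Paniccia's rule of thumb: any link running
    at 25 Gb/sec or faster over 2 meters or more needs photonics."""
    return rate_gbps >= 25 and distance_m >= 2

# A 25 Gb/sec link spanning 3 m of rack calls for a photonic link...
print(needs_photonic_link(25, 3))    # True
# ...while a short 10 Gb/sec board-level trace can stay on copper.
print(needs_photonic_link(10, 0.5))  # False
```

By this rule, copper survives only at low rates or very short reach, which is exactly why the cost discussion below centers on in-rack and around-server links.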

According to Paniccia,  “The goal of SiPh is to bring the advantages of semiconductor processing to optical communications.  In particular, high volume, low cost, highly integrated functions and scalable speeds.”

“Intel has built optical devices in silicon that operate at over 40 Gb/sec,” according to Mario.  A crucial point is that SiPh building blocks are now being integrated into a complete system.  These include lasers, data encoders, light detectors, and other functions.  Intel is using a “hybrid silicon laser” along with advanced packaging and assembly techniques. This is in sharp contrast to the other SiPh vendors, which all use separate off-chip laser light sources.

In 2009, Intel demonstrated a 50 Gb/sec SiPh link organized as 4 wavelengths × 12.5 Gb/sec per channel.  Silicon germanium was used as a photo-detector. Intel quietly pursued their research without making other public demonstrations until this year.
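The arithmetic behind that demo is plain wavelength-division multiplexing: the aggregate rate is the number of wavelengths times the per-channel rate. A quick sketch, ours for illustration only:

```python
# Aggregate rate of a WDM link = number of wavelengths x per-channel rate.
wavelengths = 4
per_channel_gbps = 12.5
aggregate_gbps = wavelengths * per_channel_gbps
print(aggregate_gbps)  # 50.0 -- matching the demonstrated 50 Gb/sec link
```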

  1. This January, Intel and Facebook announced they were collaborating on “Future Data Center Rack Technologies.”
  2. In April 2013, Intel showed a live demo of a 100 Gb/sec SiPh link at their IDF conference.  It was claimed to be “a completely integrated module that includes silicon modulators, detectors, wave-guides and circuitry.” [Intel believes this is the only module in the world that uses a hybrid silicon laser.  For more on this topic see the Panel at the end of the article.]
  3. Intel CTO Justin Rattner also displayed the new photonics cable and connector that Intel is developing with Corning at IDF. This new connector has fewer moving parts, is less susceptible to dust, and costs less than other photonics connectors. Intel and Corning intend to make this new cable and connector an industry standard. Rattner said the connector can carry 1.6 terabits/sec.
  4. In September 2013, Intel showcased the above-referenced MXC cable and connector developed with Corning, capable of 1.6 terabits/sec per cable with up to 64 fibers. They also demonstrated a 300 m SiPh link at 25 Gb/sec over multimode fiber.
  5. At ECOC later that month, Intel demonstrated 25 Gb/sec SiPh transmission, but at a much longer 820 m.
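The MXC figures quoted above are mutually consistent: 64 fibers, each carrying 25 Gb/sec, yield the stated 1.6 terabits/sec per cable. A quick check (our arithmetic, not Intel's spec sheet):

```python
# 64 fibers x 25 Gb/sec per fiber = 1600 Gb/sec = 1.6 Tb/sec per MXC cable.
fibers = 64
per_fiber_gbps = 25
total_tbps = fibers * per_fiber_gbps / 1000
print(total_tbps)  # 1.6
```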

But what is really significant is Intel’s emphasis that a total systems approach is needed to make SiPh a viable interconnect technology.  That includes photonics, cables, connectors, and structured wiring/assembly, which includes optical patch panels to interconnect servers in a rack.

Mario concluded by saying that Intel plans to make SiPh real and that the future for the technology was very bright. We take his words very seriously!

Closing Comment and Analysis:

This author has followed Intel closely since first applying for a job there in the summer of 1973. I’ve also worked for the company as a consultant in the late 1980s and mid 1990s.  We have never before seen Intel pursue a research project for more than three years without either bringing it to market or killing it (neural computing was a hot late-1980s project that was killed because the market was not there – and still isn’t).  SiPh is quite an exception to that practice, as it’s been in the research phase at Intel for over 10 years!

But Intel may be announcing SiPh products very soon.  This past January, they announced they’re working with Facebook on 100 Gb/sec rack interconnects for Data Centers.

And we couldn’t help noticing this Intel job advertisement for a SiPh Market Development Manager.

Would Intel be hiring such a person if a product announcement was not forthcoming in the near future?  

SiPh could be one of the most exciting developments in large Data Centers and HPC in years.  It could aid, abet, and accelerate the movement to cloud computing.  The technology also has the potential to drastically change the architecture of compute, memory, storage, and network equipment within the Data Center, as Joel Goergen of Cisco proposes.  That would be creative destruction for Cisco, which has a huge market in all types of Data Center equipment.

Stay tuned for more SiPh developments coming this year and next.  We are watching all aspects of this technology very closely.

PANEL: Hybrid Silicon Laser Project

Intel and the University of California Santa Barbara (UCSB) announced the demonstration of the world’s first electrically driven Hybrid Silicon Laser. This device successfully integrates the light-emitting capabilities of Indium Phosphide with the light-routing and low cost advantages of silicon. The researchers believe that with this development, silicon photonic chips containing dozens or even hundreds of hybrid silicon lasers could someday be built using standard high-volume, low-cost silicon manufacturing techniques. This development addresses one of the last hurdles to producing low-cost, highly integrated silicon photonic chips for use inside and around PCs, Servers, and Data Centers.

Comments:

  1. Thanks Alan for this extensive follow-up on your earlier article where you mentioned this revolutionary development. There are many intriguing things here.

    One that I am still trying to get my head wrapped around is the idea of component Dis-Integration and how Silicon Optics will allow for connecting of components. It almost seems like a metaphor for what the Internet has done to so many industries.

    The other part that I am trying to get my head around is what the physical form will look like. I am reminded of what I saw at CES, where the modular computer maker Xi3 introduced their data center on wheels. Literally, they connect little cubes together and claim greater efficiency than a traditional rack-mount server.

    Regardless, it seems like the first markets for the SiPh will be in the interconnect applications and not in any of the last mile, FTTH applications.

    1. Ken, Thanks for your comments. It’s difficult to get your head into any truly new technology, but this one is the most difficult I’ve seen in decades.
      For sure, the first volume market for SiPh will be interconnecting racks within a Data Center. The concept of using SiPh to interconnect individual modules/ASICs (as Cisco suggests) could be years away or may never happen. A long-reach application for interconnecting rooms within a large data center – say up to 1 or 2 km – also seems reasonable. But no one was talking about last mile/FTTH applications at the 2013 Open Server Summit.

  2. One of the speakers pointed out that transmissions exceeding 100 Gb/s-meters would inevitably require Silicon Photonics. So, long distances are already optical because of the kilometers involved. By the time we reach a terabit per second, all major interconnects will be optical. But a Tb/s is many years away.

    1. John, thanks for your comment. Toward the end of Mario’s presentation, he said that SiPh could “scale-up” or “scale-out” to reach Tb/s speeds in the future. But that has yet to be proven.

  3. Thanks for the article and the links. However, I remain quite skeptical about SiPh and the surrounding hype. Firstly, the “hybrid silicon laser” is described as using conventional III-V lasers (GaAs, GaN, InP) as the sources and using silicon to direct the lasers. I don’t see how this is a breakthrough. All III-V lasers have been coupled into silica (SiO2) fibers and waveguides for a long time. Also, silicon CMOS drivers have been used for short-reach (60 meter) VCSEL (GaAs) lasers for a long time – this configuration was used at Intel in 2002-4. Silicon makes an excellent photo-detector up to about 1 micron wavelength – which includes the GaAs laser range. No question about this. It does not make a good detector at longer wavelengths, such as used in longer-reach fiber-optics. InGaAs detectors are used at 1300 and 1550 nm. SiGe is NOT a photo-detector. SiGe can be used for laser drivers and Germanium (Ge) can be used as a photo-detector.

    I am sure that the reason Intel has supported SiPh for so long is that they must “save face.” They have published so much on the subject – and been criticized by experts for stretching the definition of “SiPh” for some time.

    With a reach of 6 to 12 inches almost anything could be used to transmit light, even at very high data rates. In fact, many companies are using very-cheap polymer optical fibers (much, much cheaper than silica) and transmitting multi-Gb/s over much longer than that (meters). So, call me a “nay-sayer” if you like. But, I remain skeptical. Thanks

  4. Announced on Nov 9, 2013: Fujitsu lights up servers with Intel’s silicon photonics. Intel and Fujitsu have been showing off a new server that uses Intel silicon photonics technology with an Optical PCI Express (OPCIe) design. This allows the storage and networking to be split up and moved away from the CPU motherboard, which means that components are easier to cool.

    There is a great illustration of Intel’s SiPh research in:

    Intel SiPh Marketing Manager Victor Krutul, Fujitsu Lights up PCI Express with Intel Silicon Photonics

    1. Further evidence that this technology is close to prime-time is the recent investment by Comcast Ventures and Cisco among others into Compass-EOS.

      “Compass-EOS launched its r10004 router in March 2013, announcing the commercial shipment of its icPhotonics-based routers, based on the world’s first shipping photonic chip-to-chip interconnect. Leading global service providers including NTT Communications and major research and education organizations including CERNET have successfully deployed the company’s routers.”

  5. It looks like HP will commercialize Silicon Photonics, or some variation, in 2016 with The Machine. With power consumption less than 1/5 that of an equivalent-performance computer, it promises to disrupt the current data center model. The driver for this is the explosion of data that HP projects will need to be processed, stored, and manipulated as the Internet of Things becomes everything.

    In this presentation from 2013, an HP representative talks about the foundational technologies needed for The Machine. Around 16:30, he mentions Silicon Photonics:

    As important as the hardware is the operating system, which will also be new and open source. Martin Fink, the speaker in the first referenced link, suggests that operating systems have been stagnant for decades.

    One of the more provocative comments made by Fink is that HP is one of the few companies left that has “systems research”. He indicates that they can bring together the disparate technologies, such as System on Chips, Silicon Photonics and operating system and create something truly innovative, instead of just an incremental improvement labeled as “innovative.” I suppose we will find out in less than 2 years.
