Among the many presentations on Silicon Photonics (SiPh) at the excellent 2013 Open Server Conference, two were of special interest:
- Joel Goergen of Cisco called for a radically new data center architecture that used SiPh to interconnect components and modules, rather than circuit cards or racks of equipment.
- Mario Paniccia of Intel focused on using SiPh for rack level interconnects, but called attention to total solution cost as a critical issue to be solved.
The other presentations – from SiPh component vendors, potential customers (Oracle), and a market researcher (Ovum) – all agreed on the promise and potential of SiPh, but differed greatly on technology details, link distances, receivers vs. transceivers, and the “sweet spot” for a volume market.
Silicon Photonics is a new approach to using light (photons) to move huge amounts of data at very high speeds with extremely low power over a thin optical fiber rather than using electrical signals over a copper cable. It’s been in the research stage at Intel for over 10 years, while a few component/module companies have already shipped SiPh receivers (but not integrated transmitter/receivers or transceivers yet).
For a description of all the SiPh (and other) presentations at the 2013 Open Server Summit, please visit their web site for the conference program. You will also find catchy quotes there like: “Only silicon photonics holds the promise of making 100G more cost-effective than 10G and 40G nets,” by Andy Bechtolsheim, Arista Networks, Oct 2012.
Using Integrated Silicon Photonics for Higher Speed Interconnect Technology – A Framework for the Next Generation, by Joel Goergen of Cisco:
Exponentially increasing Internet traffic, along with the Internet of Things (IoT), will place a huge burden on next generation, cloud resident data centers. The new requirements include: higher system performance, coping with higher power consumption via more effective cooling concepts, and faster interconnect speeds (between components, modules, cards, and racks). The challenge for designers is to provide faster compute/storage/networking systems with more effective bandwidth/performance per Watt and with highly efficient cooling. Hopefully, all that can be provided at improved cost/performance/power efficiency to the owner of the data center.
Goergen sees the prime use of SiPh as a high speed/low latency interconnect for individual components and modules used for compute, memory and storage (possibly networking as well, but that was not mentioned). Attributes of this future system include: lots of links, very low latency, lower power consumption, minimum protocols, secure and easy to scale.
The realization of that vision is shown in the figure below.
A huge advantage of this “SiPh to connect everything” approach is “intelligent power,” which includes power efficiency, monitoring, and the capability to repurpose power from one area to another. The focus would be on “power distribution to the chip level,” according to Joel. His stated bottom line was that “total ASIC power is screaming for alternative system architectures.”
An illustration of “intelligent power” within a future data center is shown in the illustration below:
The advantages of this novel approach include optimized cooling in a decentralized environment and more effective use of Data Center facility space. Joel proposed to localize the CPU/Memory/Storage farms and contain the heat based on that area of the building. The result would be to keep like components together and to allow farm types to change as the Data Center grows or as needs change. It would also better manage the costs of electrical distribution and cooling. He said that such a distributed architecture would drive new, enhanced cooling technologies.
The emphasis on power and cooling is of utmost importance as this is often cited as the number one problem with large, high performance Data Centers. Joel is proposing use of SiPh to mitigate that problem.
In summary, this presentation proposes use of SiPh for a high speed/low latency interconnect for components and modules within Data Center equipment. The concept of cards and racks are replaced by interconnected components/modules.
The benefits were said to include:
- Drive higher voltages to the chip, reducing the DC voltage (IR) drop
- Intelligent power distribution – not just efficiency or monitoring
- Liquid cooling at the chip / at the system – hotter components and higher densities are coming
- Dis-integrate the Data Center components – target the most effective way to organize and optimize power and cooling, using photonic interconnects as the framework
Revolutionizing Computing and Communications with Silicon Photonics, by Mario Paniccia, PhD (Physics), of Intel
Intel claims that Silicon photonics offers a way to extend silicon manufacturing to higher speeds and thus provide low cost opto-electronic solutions and tremendous bandwidth. The results would be advances in a wide range of applications in servers, high-performance computing, and networking. Recent developments point to practical applications in the near term. For example, a new optical connector and fiber technology support data rates up to 1.6 terabits per second.
Mario unequivocally stated that the “sweet spot” for SiPh deployment was rack level interconnects on the order of six to 12 inches. [Other SiPh speakers talked about distances of 2km and more]. He indicated that Mega Data Centers, High Performance Computing (HPC) and the NSA Data Center in Utah were all interested in SiPh for that application. SiPh promises include: increased performance and energy efficiency with lower system cost and thermal density. This will “enable new form factors,” he added.
Paniccia claims that any interconnect link at >= 25G b/sec over a distance of >= 2m will need a photonic link. But such fiber optic interconnect links are expensive and dominate HPC/Mega Data Center costs. The challenge is total systems cost, which includes the photonics (laser, packaging, assembly) as well as the cables and connectors. “Current cost constraints limit use of photonics in and around servers,” Mario said.
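Paniccia's rule of thumb can be sketched as a simple check. The 25G b/sec and 2m thresholds come from his talk; the function name and structure are our own illustration, not Intel's formulation:

```python
def needs_photonic_link(rate_gbps: float, distance_m: float) -> bool:
    """Paniccia's rule of thumb: a link at or above 25 Gb/s running
    2 m or farther is expected to need a photonic (optical) link;
    below either threshold, copper can still do the job."""
    return rate_gbps >= 25 and distance_m >= 2.0

# A 25 Gb/s rack-level run of 3 m calls for optics:
print(needs_photonic_link(25, 3.0))   # True
# A 10 Gb/s link inside a chassis (0.5 m) stays on copper:
print(needs_photonic_link(10, 0.5))   # False
```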
According to Paniccia, “The goal of SiPh is to bring the advantages of semiconductor processing to optical communications. In particular, high volume, low cost, highly integrated functions and scalable speeds.”
“Intel has built optical devices in silicon that operate >40G b/sec,” according to Mario. A crucial point is that SiPh building blocks are now being integrated into a complete system. These include: lasers, data encoders, light detectors, and other functions. Intel is using a “hybrid Silicon laser” along with advanced packaging and assembly techniques. This is in sharp contrast to the other SiPh vendors which all use separate off-chip laser light sources.
In 2009, Intel demonstrated a 50G b/sec SiPh link that was organized as 4 wavelengths × 12.5G b/sec per channel. Silicon germanium was used as the photo-detector. Intel quietly pursued their research without making other public demonstrations until this year.
- This January, Intel and Facebook announced they were collaborating on “Future Data Center Rack Technologies“
- In April 2013, Intel showed a live demo of a 100G b/sec SiPh link at their IDF conference. It was claimed to be “a completely integrated module that includes silicon modulators, detectors, wave-guides and circuitry.” [Intel believes this is the only module in the world that uses a hybrid silicon laser. For more on this topic see Panel at the end of the article].
- Intel CTO Justin Rattner also displayed the new photonics cable and connector that Intel is developing with Corning at IDF. This new connector has fewer moving parts, is less susceptible to dust and costs less than other photonics connectors. Intel and Corning intend to make this new cable and connector an industry standard. Rattner said the connector can carry 1.6 terabits/sec. You can watch the video here
- In September 2013, Intel showcased the above referenced MXC cable and connector developed with Corning, capable of 1.6 terabits/sec per cable with up to 64 fibers. They also demonstrated a 300m SiPh link @ 25G b/sec over multimode fiber.
- At ECOC later that month, Intel demonstrated 25G b/sec SiPh transmission over a much longer 820m link.
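The aggregate figures quoted above are straightforward lane arithmetic: 64 fibers at 25G b/sec per fiber gives the MXC cable's 1.6 terabits/sec, just as the 2009 demo's 4 wavelengths at 12.5G b/sec gave 50G b/sec. A back-of-the-envelope sketch (the helper name is ours):

```python
def aggregate_gbps(lanes: int, per_lane_gbps: float) -> float:
    """Total link capacity is the number of lanes (parallel fibers
    or WDM wavelengths) times the per-lane data rate."""
    return lanes * per_lane_gbps

# MXC cable: 64 fibers x 25 Gb/s = 1600 Gb/s = 1.6 Tb/s
print(aggregate_gbps(64, 25))     # prints 1600
# 2009 Intel demo: 4 wavelengths x 12.5 Gb/s = 50 Gb/s
print(aggregate_gbps(4, 12.5))    # prints 50.0
```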
But what is really significant is Intel’s emphasis that a total systems approach is needed to make SiPh a viable interconnect technology. That includes the photonics, cables, connectors, and structured wiring/assembly – including optical patch panels to interconnect servers in a rack.
Mario concluded by saying that Intel plans to make SiPh real and that the future for the technology was very bright. We take his words very seriously!
Closing Comment and Analysis:
This author has followed Intel closely since first applying for a job there in the summer of 1973. I’ve also worked for the company as a consultant in the late 1980s and mid 1990s. We have never before seen Intel pursue a research project for more than three years without either bringing it to market or killing it (neural computing was a late 1980s hot project that was killed, as that market was not there – and still isn’t). SiPh is quite an exception to that practice, as it’s been in the research phase at Intel for over 10 years!
But Intel may be announcing SiPh products very soon. This past January, they announced they’re working with Facebook on 100G b/sec rack interconnects for Data Centers.
And we couldn’t help notice this Intel job advertisement for a SiPh Market Development Manager.
Would Intel be hiring such a person if a product announcement was not forthcoming in the near future?
SiPh could be one of the most exciting developments in large Data Centers and HPC in years. It could aid, abet and accelerate the movement to cloud computing. The technology also has the potential to drastically change the architecture of compute, memory, storage and network equipment within the Data Center, as Joel Goergen of Cisco proposes. That would be creative destruction for Cisco, which has a huge market in all types of Data Center equipment.
Stay tuned for more SiPh developments coming this year and next. We are watching all aspects of this technology very closely.
For a list of Intel’s SiPh research achievements please visit:
PANEL: Hybrid Silicon Laser Project
Intel and the University of California Santa Barbara (UCSB) announced the demonstration of the world’s first electrically driven Hybrid Silicon Laser. This device successfully integrates the light-emitting capabilities of Indium Phosphide with the light-routing and low cost advantages of silicon. The researchers believe that with this development, silicon photonic chips containing dozens or even hundreds of hybrid silicon lasers could someday be built using standard high-volume, low-cost silicon manufacturing techniques. This development addresses one of the last hurdles to producing low-cost, highly integrated silicon photonic chips for use inside and around PCs, Servers, and Data Centers.