Photonic Chips Finally Scale Up to Power Massive AI Models
Source Publication: Nature Communications
Primary Authors: Zhou, Jiang, Xu et al.

In the quest for faster, greener artificial intelligence, photonic computing—using light instead of electricity—has long been a promising contender. However, these systems face a significant hurdle: they are analogue by nature. Unlike digital systems, analogue signals suffer from noise that accumulates as data passes through the layers of a neural network. Until now, this ‘error accumulation’ restricted optical chips to a depth of only about ten layers, rendering them too shallow for modern, complex AI tasks.
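The depth limit can be seen in a toy simulation. The sketch below is purely illustrative (it is not the paper's physical model): each "layer" applies a fixed random linear transform and then injects a small amount of Gaussian noise, mimicking analogue imperfections. Comparing against a noiseless run through the same weights shows the relative error compounding with depth; the layer dimension, noise level, and depths chosen here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64         # signal dimension (arbitrary)
max_depth = 100  # deepest network we simulate

# Fixed random layers, scaled so the signal norm stays roughly constant.
weights = [rng.standard_normal((dim, dim)) / np.sqrt(dim) for _ in range(max_depth)]

def propagate(x, n_layers, noise_std):
    """Run x through the first n_layers, adding per-layer analogue noise."""
    x = x.copy()
    for W in weights[:n_layers]:
        x = W @ x + noise_std * rng.standard_normal(dim)
    return x

x0 = rng.standard_normal(dim)
errors = {}
for depth in (1, 10, 100):
    clean = propagate(x0, depth, noise_std=0.0)   # ideal digital reference
    noisy = propagate(x0, depth, noise_std=0.02)  # analogue run with noise
    errors[depth] = np.linalg.norm(noisy - clean) / np.linalg.norm(clean)
    print(f"{depth:3d} layers: relative error {errors[depth]:.3f}")
```

Running this shows the relative error growing steadily with depth, which is why analogue hardware that cannot suppress per-layer noise stalls at shallow networks.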
A new study has identified the root cause of this instability: ‘propagation redundancies’. By introducing precise on-chip perturbations to decouple these computational correlations, the researchers effectively eliminated the redundancies. This led to the development of the single-layer photonic computing (SLiM) chip, a device that is robust against errors and can extend spatial depth from the millimetre scale to hundreds of metres.
The performance implications are vast. The team experimentally constructed a 100-layer neural network for image classification and even ran Large Language Models (LLMs) with up to 640 layers for image generation. Operating at a 10-GHz data rate, these deep optical networks achieved accuracy comparable to that of ideal digital simulations. This breakthrough suggests that energy-efficient analogue hardware is finally ready to handle the state-of-the-art deep learning models that drive today's AI era.