Scientists from Tsinghua University in China have introduced an innovative photonic computing method that can significantly improve the training processes of optical neural networks, China Daily reported on August 10.
This achievement, together with the launch of the light-based Taichi-II chip, could offer a faster and more energy-efficient alternative for training large language models. Chinese researchers Lu Fang and Dai Qionghai, along with their team, published their results in a paper titled "Fully forward mode (FFM) training for optical neural networks" in the journal Nature. The paper, published Wednesday, highlights the potential for advances in theoretical and applied areas including deep neural networks, high-sensitivity perception, and topological photonics.
The current standard for training optical AI models relies heavily on emulating the physical system on digital computers, an approach limited by high power requirements and dependence on GPU hardware. The FFM learning method developed at Tsinghua allows these computation-intensive training processes to be carried out directly on the physical system itself, largely removing the limitations of numerical simulation, the research team reports.
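To illustrate the general idea of training directly on hardware, the sketch below optimizes a black-box "physical" system using only forward evaluations, via simultaneous-perturbation (SPSA) updates. This is a minimal sketch of forward-only training in general, not the Tsinghua team's actual FFM algorithm, which obtains gradients through forward light propagation in the optical system itself; the physical_forward function and the toy loss here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def physical_forward(params, x):
    # Stand-in for a forward pass through the physical optical system.
    # In a real setup this would be a hardware measurement, not a simulation.
    return np.tanh(x @ params)

def loss(params, x, y):
    # Mean squared error between measured output and target.
    return np.mean((physical_forward(params, x) - y) ** 2)

def spsa_step(params, x, y, lr=0.05, eps=1e-2):
    # Estimate the gradient from just two forward passes (no backprop
    # through the hardware), using a random +/-1 perturbation of all
    # parameters simultaneously.
    delta = rng.choice([-1.0, 1.0], size=params.shape)
    g = (loss(params + eps * delta, x, y) -
         loss(params - eps * delta, x, y)) / (2 * eps)
    return params - lr * g * delta

# Toy data: fit a random nonlinear mapping.
x = rng.normal(size=(64, 8))
true_w = rng.normal(size=(8, 4))
y = np.tanh(x @ true_w)

params = rng.normal(size=(8, 4))
for step in range(2000):
    params = spsa_step(params, x, y)

print("final loss:", loss(params, x, y))
```

The point of the sketch is that no backward pass through the hardware is ever required: each parameter update needs only two forward measurements of the system's output.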
Although photonic computing provides high computing power at lower energy consumption than traditional methods, its training has so far depended on offline pre-computation: the precise and complex calculations required for advanced AI training still rely heavily on GPUs, explained Liu Gang, chief economist at the China Institute for Next Generation AI Development Strategies.
The new technology developed by the Tsinghua team promises to overcome these limitations, potentially eliminating the need for extensive use of GPUs and leading to more efficient and accurate training of AI models, Liu added.
The first-generation Taichi chip, also developed at Tsinghua University and launched in April, was described in the journal Science. The chip uses photonic integrated circuits that process data with light signals instead of electrical ones, enabling ultrafast data transfer while significantly reducing power consumption.
Unlike its predecessor, the Taichi-II chip was designed specifically to train large-scale neural networks in situ using light, filling a critical gap in photonic computing. This innovation is expected to accelerate the training of AI models and enable breakthroughs in areas such as high-performance intelligent imaging and efficient analysis of topological photonic systems.
Energy consumption in the AI industry remains a major concern. According to Norwegian research institute Rystad Energy, the combined expansion of traditional and AI data centers and chip manufacturing facilities in the U.S. is projected to increase energy demand by 177 terawatt-hours (TWh) from 2023 to 2030, reaching a total of 307 TWh. By comparison, utilities generated 4,178 TWh of electricity in the United States in 2023, according to the U.S. Energy Information Administration.
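For context, the figures quoted above imply a 2023 baseline and a share of total U.S. generation that can be checked with a few lines of arithmetic (values taken directly from the Rystad Energy and EIA numbers cited in the previous paragraph):

```python
# Figures quoted above: +177 TWh growth from 2023 to 2030, reaching
# 307 TWh total; 4,178 TWh generated by U.S. utilities in 2023.
growth_twh = 177
total_2030_twh = 307
us_generation_2023_twh = 4_178

baseline_2023_twh = total_2030_twh - growth_twh          # 130 TWh in 2023
share_of_2023_gen = total_2030_twh / us_generation_2023_twh

print(f"Implied 2023 baseline: {baseline_2023_twh} TWh")
print(f"307 TWh is {share_of_2023_gen:.1%} of 2023 U.S. generation")  # ~7.3%
```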
Source: Rossa Primavera
