In a bold challenge to US giant Nvidia’s dominance in artificial intelligence (AI) hardware, Chinese researchers have run a cutting-edge video-generation model on an off-the-shelf industrial chip – outperforming high-end GPUs in both speed and efficiency.
Their system, FlightVGM, recorded a 30 per cent performance boost and energy efficiency 4½ times that of Nvidia’s flagship RTX 3090 GPU – all while running on the widely available V80 FPGA chip from Advanced Micro Devices (AMD), another leading US semiconductor firm.
The innovation earned top honours at the prestigious FPGA 2025 conference, which concluded on March 1. The win marked the first time a mainland Chinese team had claimed the event’s Best Paper Award, signalling a seismic shift in the global race to optimise AI hardware.
Developed by scientists from Shanghai Jiao Tong University, Tsinghua University and Beijing-based start-up Infinigence-AI, the model could redefine how industries deploy cost-effective, energy-efficient AI systems, from robotic controls to autonomous vehicles.
FPGAs, or field-programmable gate arrays, are programmable semiconductor devices whose circuitry and functionality can be modified after manufacturing. In contrast, conventional chips such as CPUs (central processing units), GPUs (graphics processing units) and ASICs (application-specific integrated circuits) have fixed functionality once fabricated.
In video generation and general computing, FPGAs and Nvidia GPUs each have distinct advantages. FPGAs offer a customisable architecture that can be tailored to specific applications, yielding higher energy efficiency and lower latency. Nvidia GPUs, meanwhile, are known for their massive parallel computing power and excel at processing large-scale data and handling complex computational tasks.
Building on previous research, the Chinese team developed FlightVGM, the first video-generation AI model to run on an FPGA. Through innovations in data architecture and scheduling methods, FlightVGM achieved computational performance that could outpace GPUs.