At the New York City launch event for the Intel Core Ultra and 5th Gen Xeon datacenter chips, Intel CEO Pat Gelsinger took direct aim at Nvidia's CUDA technology, arguing that inference is becoming more significant than training for AI. Gelsinger said the industry is motivated to break CUDA's dominance in training, pointing to work from the MLIR project, Google, and OpenAI toward a more open "Pythonic programming layer" for AI training.
Intel's argument centers on inference: once a model is trained, Gelsinger contended, there is no dependency on CUDA. He positioned Intel's Gaudi 3 as a solution for inference workloads, which the company believes will pair effectively with Xeon processors and edge PCs. While not dismissing training, Intel views the inference market as paramount.
Sandra Rivera, Intel's executive vice president and general manager of the Data Center and AI Group, underlined Intel's capability to scale AI efforts from the data center to the PC, positioning the company as a preferred partner due to its volume production capacity.
Gelsinger outlined a comprehensive competitive strategy, aiming to address 100% of the datacenter AI Total Addressable Market (TAM) through leadership in hardware, including accelerators, and through its foundry business. Intel intends to pursue its own hardware development opportunities while also collaborating commercially with industry players such as Nvidia and AMD.
While Gelsinger's strategy is ambitious, its real impact on CUDA's dominance in the AI landscape remains to be seen as applications for the newly launched Intel chips, and those of its competitors, continue to evolve.