Vishal Shukla & Shekar Ayyar & Anand Raghavan | Robotics & AI Infrastructure Leaders
AI Summary
This video is a panel discussion from theCUBE featuring industry leaders from Aviz Networks, Arrcus, and Cisco, focusing on the evolving role of networking and data center architecture in supporting AI workloads, particularly large language models (LLMs) and distributed inference. Key points include the shift from AI training to inference at the edge, the importance of scalable, low-latency networking that integrates new hardware like DPUs and GPUs, and the need for secure, private AI infrastructure within enterprise and sovereign cloud environments. The discussion covers innovations in network protocols, hardware (such as NVIDIA’s GPUs and Broadcom’s switches), and software ecosystems like the open source network operating system SONiC and the Model Context Protocol (MCP) that facilitate AI operations. Panelists emphasize how AI can augment network operations through agentic systems for anomaly detection and automation, but caution that full automation is not yet practical. The role of telecom operators in leveraging their physical networks for distributed AI inference is highlighted as a major growth area. Security, privacy, and sovereignty are recurring themes, especially as AI deployments become more region-specific. Overall, the conversation underscores the critical interplay between advanced networking technology and AI in powering the next generation of data center and edge computing.