Manvinder Singh, Redis | Robotics & AI Infrastructure Leaders



AI Summary

This video is part of the Robotics & AI Infrastructure Leaders series recorded in Palo Alto, featuring Manvinder Singh, VP of AI Product Management at Redis. It explores how Redis is evolving its infrastructure to support AI applications, particularly the real-time data access and memory needed by AI agents and large language models (LLMs).

Key points include Redis's move beyond traditional fast caching to semantic caching, which reduces latency for AI inference; vector search capabilities optimized for both horizontal and vertical scaling; and managing state and memory for AI agents. The discussion also touches on challenges customers face, such as scaling inference, onboarding AI agents with enterprise data, and maintaining accuracy and guardrails against hallucination in AI outputs.

Singh emphasizes Redis's role as a data delivery platform that complements backend data lakes and operational databases, enabling hybrid cloud and on-premise deployments on a unified stack. The conversation also covers Redis's innovation in two-tier caching, which combines RAM and SSDs for cost-effective large-scale memory; the culture of AI innovation at Redis led by original creator Salvatore Sanfilippo; and future architectural directions supporting developers building AI-driven applications with flexible, high-performance Redis building blocks.
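To make the semantic-caching idea from the discussion concrete, here is a minimal sketch: instead of exact-match key lookup, a query's embedding is compared against the embeddings of previously answered queries, and a sufficiently similar prior answer is reused, skipping a fresh LLM call. The class name, threshold, and in-memory list below are illustrative assumptions; a real deployment would store and index the embeddings with Redis vector search and call an actual embedding model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class SemanticCache:
    """Toy semantic cache: a plain list stands in for a Redis vector index."""

    def __init__(self, threshold=0.95):
        self.threshold = threshold   # minimum similarity to count as a hit
        self.entries = []            # list of (embedding, cached_answer)

    def get(self, embedding):
        """Return the cached answer for the most similar query, or None."""
        best_answer, best_sim = None, 0.0
        for emb, answer in self.entries:
            sim = cosine(embedding, emb)
            if sim > best_sim:
                best_answer, best_sim = answer, sim
        return best_answer if best_sim >= self.threshold else None

    def put(self, embedding, answer):
        """Cache an answer keyed by its query embedding."""
        self.entries.append((embedding, answer))

# Usage with toy embeddings (a real system would embed the query text):
cache = SemanticCache(threshold=0.95)
cache.put([1.0, 0.0, 0.1], "Paris")
hit = cache.get([0.99, 0.01, 0.1])   # near-duplicate query -> cache hit
miss = cache.get([0.0, 1.0, 0.0])    # unrelated query -> miss, would call LLM
```

The payoff is that a cache hit costs one similarity search instead of a full model inference, which is where the latency reduction described in the video comes from.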