The Coherence Trap: Why LLMs Feel Smart (But Aren’t Thinking) - Travis Frisinger



AI Summary

In this talk, Travis Fry Singinger explores the concept of the “coherency trap” in the context of large language models (LLMs). He reflects on his initial experiences with models like GPT-3.5 and GPT-4, emphasizing the progression from disappointment to a sense of understanding and capability with GPT-4. Singinger shares insights from his experiments with AI, including creating a blog and producing a concept album with AI assistance, highlighting the collaborative potential of AI. He introduces the “AI decision loop” framework, focusing on the importance of framing problems, generating outputs, and iterating on those responses. He critiques the traditional view of AI as intelligent, proposing instead that coherence—not intelligence—is the key property of LLMs, which are better understood as systems that navigate latent spaces to produce relevant outputs. Singinger concludes by advocating for a shift in how we design interactions with AI, moving towards a focus on coherence rather than the pursuit of intelligence.